Oct 02 18:20:40 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 02 18:20:41 crc restorecon[4657]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:41 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 
18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 02 18:20:42 crc 
restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 
18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 02 18:20:42 crc restorecon[4657]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 02 18:20:44 crc kubenswrapper[4832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 02 18:20:44 crc kubenswrapper[4832]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 02 18:20:44 crc kubenswrapper[4832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 02 18:20:44 crc kubenswrapper[4832]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 02 18:20:44 crc kubenswrapper[4832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 02 18:20:44 crc kubenswrapper[4832]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.605054 4832 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641424 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641475 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641481 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641487 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641492 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641497 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641503 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641507 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641511 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641515 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641518 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641522 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641526 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641531 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641535 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641538 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641542 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641546 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641550 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641554 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 02 18:20:44 crc kubenswrapper[4832]: 
W1002 18:20:44.641558 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641562 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641566 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641571 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641575 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641580 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641584 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641589 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641593 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641598 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641602 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641607 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641612 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641617 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641621 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641632 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641636 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641642 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
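The runs of "unrecognized feature gate" warnings above all come from feature_gate.go:330: the cluster passes the kubelet its full OpenShift feature-gate list, and the embedded Kubernetes gate registry warns once per pass for every name it does not know (MultiArchInstallAzure, PinnedImages, RouteAdvertisements, and so on) while still applying the ones it does, such as CloudDualStackNodeIPs. A minimal Python sketch for tallying the rejected names, assuming this journal excerpt has been saved to a local file named kubelet.log (a hypothetical path):

import re
from collections import Counter

# Count every name flagged by feature_gate.go:330 in the excerpt.
pattern = re.compile(r"unrecognized feature gate: (\w+)")
with open("kubelet.log") as fh:
    gates = Counter(pattern.findall(fh.read()))

# Each pass over the gate map repeats the same warnings, so the per-name
# count also shows how many passes the kubelet made during this start-up.
for name, count in gates.most_common():
    print(f"{name}: {count}")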
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641647 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641651 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641655 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641659 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641663 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641667 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641671 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641676 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641679 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641684 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641688 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641692 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641695 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641699 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641705 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641709 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641713 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641717 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641720 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641724 4832 feature_gate.go:330] unrecognized feature gate: Example Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641728 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641731 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641735 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641740 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641744 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641750 4832 feature_gate.go:330] 
unrecognized feature gate: AdditionalRoutingCapabilities Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641754 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641759 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641766 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641771 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641775 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641779 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.641782 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641886 4832 flags.go:64] FLAG: --address="0.0.0.0" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641902 4832 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641909 4832 flags.go:64] FLAG: --anonymous-auth="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641916 4832 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641923 4832 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641927 4832 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641933 4832 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641939 4832 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641945 4832 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641950 4832 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641954 4832 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641961 4832 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641967 4832 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641974 4832 flags.go:64] FLAG: --cgroup-root="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641978 4832 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641983 4832 flags.go:64] FLAG: --client-ca-file="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641988 4832 flags.go:64] FLAG: --cloud-config="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641992 4832 flags.go:64] FLAG: --cloud-provider="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.641996 4832 flags.go:64] FLAG: --cluster-dns="[]" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642003 4832 flags.go:64] FLAG: --cluster-domain="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642007 4832 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642011 4832 flags.go:64] FLAG: --config-dir="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642016 4832 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642022 4832 flags.go:64] FLAG: --container-log-max-files="5" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642028 4832 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642032 4832 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642037 4832 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642042 4832 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642046 4832 flags.go:64] FLAG: --contention-profiling="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642051 4832 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642055 4832 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642059 4832 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642064 4832 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642069 4832 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642075 4832 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642079 4832 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642084 4832 flags.go:64] FLAG: --enable-load-reader="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642088 4832 flags.go:64] FLAG: --enable-server="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642093 4832 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642098 4832 flags.go:64] FLAG: --event-burst="100" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642103 4832 flags.go:64] FLAG: --event-qps="50" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642107 4832 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642111 4832 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642116 4832 flags.go:64] FLAG: --eviction-hard="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642122 4832 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642129 4832 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642134 4832 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642140 4832 flags.go:64] FLAG: --eviction-soft="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642144 4832 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642149 4832 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 
18:20:44.642153 4832 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642158 4832 flags.go:64] FLAG: --experimental-mounter-path="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642162 4832 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642167 4832 flags.go:64] FLAG: --fail-swap-on="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642173 4832 flags.go:64] FLAG: --feature-gates="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642180 4832 flags.go:64] FLAG: --file-check-frequency="20s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642185 4832 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642191 4832 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642196 4832 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642202 4832 flags.go:64] FLAG: --healthz-port="10248" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642207 4832 flags.go:64] FLAG: --help="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642213 4832 flags.go:64] FLAG: --hostname-override="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642218 4832 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642224 4832 flags.go:64] FLAG: --http-check-frequency="20s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642229 4832 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642236 4832 flags.go:64] FLAG: --image-credential-provider-config="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642242 4832 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642247 4832 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642252 4832 flags.go:64] FLAG: --image-service-endpoint="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642257 4832 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642283 4832 flags.go:64] FLAG: --kube-api-burst="100" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642289 4832 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642294 4832 flags.go:64] FLAG: --kube-api-qps="50" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642299 4832 flags.go:64] FLAG: --kube-reserved="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642306 4832 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642311 4832 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642317 4832 flags.go:64] FLAG: --kubelet-cgroups="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642326 4832 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642332 4832 flags.go:64] FLAG: --lock-file="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642338 4832 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642343 4832 flags.go:64] 
FLAG: --log-flush-frequency="5s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642348 4832 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642365 4832 flags.go:64] FLAG: --log-json-split-stream="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642372 4832 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642378 4832 flags.go:64] FLAG: --log-text-split-stream="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642383 4832 flags.go:64] FLAG: --logging-format="text" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642388 4832 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642393 4832 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642398 4832 flags.go:64] FLAG: --manifest-url="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642402 4832 flags.go:64] FLAG: --manifest-url-header="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642409 4832 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642414 4832 flags.go:64] FLAG: --max-open-files="1000000" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642420 4832 flags.go:64] FLAG: --max-pods="110" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642424 4832 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642429 4832 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642433 4832 flags.go:64] FLAG: --memory-manager-policy="None" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642439 4832 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642444 4832 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642450 4832 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642455 4832 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642471 4832 flags.go:64] FLAG: --node-status-max-images="50" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642477 4832 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642483 4832 flags.go:64] FLAG: --oom-score-adj="-999" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642489 4832 flags.go:64] FLAG: --pod-cidr="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642494 4832 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642503 4832 flags.go:64] FLAG: --pod-manifest-path="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642510 4832 flags.go:64] FLAG: --pod-max-pids="-1" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642515 4832 flags.go:64] FLAG: --pods-per-core="0" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642521 4832 flags.go:64] FLAG: --port="10250" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 
18:20:44.642526 4832 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642532 4832 flags.go:64] FLAG: --provider-id="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642538 4832 flags.go:64] FLAG: --qos-reserved="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642543 4832 flags.go:64] FLAG: --read-only-port="10255" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642548 4832 flags.go:64] FLAG: --register-node="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642554 4832 flags.go:64] FLAG: --register-schedulable="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642559 4832 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642570 4832 flags.go:64] FLAG: --registry-burst="10" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642576 4832 flags.go:64] FLAG: --registry-qps="5" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642581 4832 flags.go:64] FLAG: --reserved-cpus="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642588 4832 flags.go:64] FLAG: --reserved-memory="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642595 4832 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642601 4832 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642607 4832 flags.go:64] FLAG: --rotate-certificates="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642612 4832 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642618 4832 flags.go:64] FLAG: --runonce="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642623 4832 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642629 4832 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642636 4832 flags.go:64] FLAG: --seccomp-default="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642642 4832 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642647 4832 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642653 4832 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642659 4832 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642666 4832 flags.go:64] FLAG: --storage-driver-password="root" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642671 4832 flags.go:64] FLAG: --storage-driver-secure="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642676 4832 flags.go:64] FLAG: --storage-driver-table="stats" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642682 4832 flags.go:64] FLAG: --storage-driver-user="root" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642687 4832 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642693 4832 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642699 4832 flags.go:64] FLAG: --system-cgroups="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642704 4832 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642714 4832 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642719 4832 flags.go:64] FLAG: --tls-cert-file="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642726 4832 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642734 4832 flags.go:64] FLAG: --tls-min-version="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642740 4832 flags.go:64] FLAG: --tls-private-key-file="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642745 4832 flags.go:64] FLAG: --topology-manager-policy="none" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642750 4832 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642756 4832 flags.go:64] FLAG: --topology-manager-scope="container" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642762 4832 flags.go:64] FLAG: --v="2" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642770 4832 flags.go:64] FLAG: --version="false" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642778 4832 flags.go:64] FLAG: --vmodule="" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642784 4832 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.642791 4832 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.642971 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.642980 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.642986 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.642990 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.642994 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.642998 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643003 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643007 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643011 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643015 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643018 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643024 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
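The flags.go:64 block above is the kubelet echoing every command-line flag after parsing. It records the values behind the two deprecation notices earlier in the stream: --pod-infra-container-image (the quay.io/openshift-release-dev sandbox image) and --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi", which the notice says should instead live in the file named by --config (/etc/kubernetes/kubelet.conf here, presumably as the systemReserved stanza of a KubeletConfiguration). A sketch that lifts the FLAG lines into a dict and decodes the reservation string, again assuming a hypothetical local copy kubelet.log:

import re

flag_re = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def parse_flags(text):
    # The dump is one 'FLAG: --name="value"' per entry; the kubelet
    # prints each flag exactly once per start-up.
    return dict(flag_re.findall(text))

flags = parse_flags(open("kubelet.log").read())

# "cpu=200m,ephemeral-storage=350Mi,memory=350Mi" -> {'cpu': '200m', ...}
reserved = dict(pair.split("=", 1)
                for pair in flags["--system-reserved"].split(","))
print(reserved)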
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643028 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643032 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643036 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643040 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643044 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643047 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643051 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643055 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643058 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643063 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643067 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643070 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643074 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643078 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643081 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643085 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643088 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643092 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643096 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643099 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643103 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643106 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643110 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643113 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643118 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
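Every entry in the stream carries the standard klog header: a severity letter fused to the month and day (I1002 is Info on Oct 02, W1002 a Warning), the wall-clock time, the emitting PID (4832 for this kubelet), and the source file:line. That structure makes the noise machine-filterable; a small header parser, under the same kubelet.log assumption:

import re

# klog header: <severity><MMDD> <HH:MM:SS.micros> <pid> <file:line>]
klog = re.compile(
    r"([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\]"
)

text = open("kubelet.log").read()
headers = list(klog.finditer(text))
warnings = [m for m in headers if m.group(1) == "W"]
print(f"{len(headers)} entries, {len(warnings)} warnings, "
      f"first warning from {warnings[0].group(6)}")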
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643123 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643128 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643133 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643136 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643140 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643143 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643147 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643151 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643154 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643158 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643161 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643165 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643169 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643174 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643178 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643183 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643199 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643204 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643208 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643212 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643217 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643221 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643225 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643229 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643233 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643237 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643240 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643245 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643249 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643253 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643257 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643263 4832 feature_gate.go:330] unrecognized feature gate: Example Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643287 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.643291 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.643307 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.654203 4832 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.654263 4832 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654383 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654394 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654399 4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654403 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654408 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654412 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654417 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654424 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
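Once the warnings are out of the way, feature_gate.go:386 prints the effective map: only the gates this kubelet actually recognizes, with CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1 and ValidatingAdmissionPolicy forced on. The same summary is printed again after the later passes (at 18:20:44.654694 and 18:20:44.655131 below) and is identical each time. A sketch that decodes the Go map dump and checks that, with the usual kubelet.log assumption:

import re

def parse_gate_map(dump):
    # 'map[Name:true Other:false ...]' -> {'Name': True, 'Other': False}
    body = re.search(r"map\[(.*?)\]", dump).group(1)
    return {k: v == "true" for k, v in (p.split(":") for p in body.split())}

text = open("kubelet.log").read()
dumps = re.findall(r"feature gates: \{(map\[.*?\])\}", text)

assert len(set(dumps)) == 1          # every pass settles on the same map
print(parse_gate_map(dumps[0]))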
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654433 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654439 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654444 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654448 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654453 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654456 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654461 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654466 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654470 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654474 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654478 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654482 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654486 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654490 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654494 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654498 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654501 4832 feature_gate.go:330] unrecognized feature gate: Example Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654505 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654510 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654514 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654519 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654524 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654528 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654532 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654535 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654539 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654544 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654548 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654552 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654556 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654560 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654564 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654568 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654572 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654575 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654578 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654582 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654585 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654589 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654592 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654596 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654599 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654603 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654607 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654610 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654614 4832 feature_gate.go:330] 
unrecognized feature gate: PlatformOperators Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654617 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654620 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654624 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654628 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654635 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654639 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654643 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654648 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654652 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654657 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654662 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654667 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654671 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654675 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654679 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654682 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654687 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.654694 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654817 4832 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654825 4832 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654829 4832 feature_gate.go:330] unrecognized feature gate: Example Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654833 4832 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654837 
4832 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654840 4832 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654844 4832 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654847 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654851 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654855 4832 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654859 4832 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654862 4832 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654866 4832 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654869 4832 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654873 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654877 4832 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654880 4832 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654884 4832 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654888 4832 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654892 4832 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654897 4832 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654902 4832 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654934 4832 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654939 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654943 4832 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654947 4832 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654951 4832 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654954 4832 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654958 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654961 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654965 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654969 4832 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654972 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654976 4832 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654981 4832 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654985 4832 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654988 4832 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654992 4832 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.654996 4832 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655000 4832 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655003 4832 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655007 4832 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655012 4832 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655017 4832 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655021 4832 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655026 4832 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655030 4832 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655034 4832 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655039 4832 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655042 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655047 4832 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655051 4832 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655055 4832 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655059 4832 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655063 4832 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655067 4832 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655070 4832 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655074 4832 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655077 4832 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655081 4832 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655085 4832 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655089 4832 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655094 4832 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655098 4832 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655101 4832 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655105 4832 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655109 4832 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655113 4832 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655117 4832 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655120 4832 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 02 18:20:44 crc kubenswrapper[4832]: W1002 18:20:44.655124 4832 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.655131 4832 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.655394 4832 server.go:940] "Client rotation is on, will bootstrap in background" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.661427 4832 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.661519 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
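The bootstrap lines above confirm the node's client certificate is still valid, and the certificate_manager entries that follow show the rotation schedule: the cert expires 2026-02-24 05:52:08 UTC, the manager picks a rotation deadline of 2026-01-05 20:07:56 UTC (a randomized point late in the validity window), and the "Waiting 2281h47m..." figure is simply that deadline minus the current journal time, 2025-10-02 18:20:44 UTC. The subtraction checks out to within the sub-second parts of both timestamps:

from datetime import datetime, timezone

now = datetime(2025, 10, 2, 18, 20, 44, tzinfo=timezone.utc)       # journal time
deadline = datetime(2026, 1, 5, 20, 7, 56, tzinfo=timezone.utc)    # rotation deadline

wait = deadline - now
print(wait)                                   # 95 days, 1:47:12
print(round(wait.total_seconds() / 3600, 2))  # ~2281.79 h, i.e. 2281h47m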
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.664691 4832 server.go:997] "Starting client certificate rotation"
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.664724 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.667978 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 20:07:56.2513543 +0000 UTC
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.668067 4832 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 2281h47m11.583289781s for next certificate rotation
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.829498 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.832376 4832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.885033 4832 log.go:25] "Validated CRI v1 runtime API"
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.976510 4832 log.go:25] "Validated CRI v1 image API"
Oct 02 18:20:44 crc kubenswrapper[4832]: I1002 18:20:44.978971 4832 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.010992 4832 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-02-18-13-40-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.011088 4832 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.043333 4832 manager.go:217] Machine: {Timestamp:2025-10-02 18:20:45.036770537 +0000 UTC m=+2.006213509 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799886 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a BootID:7a67a654-6d49-4a75-b64e-12cb73cb5c72 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a1:6c:b2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a1:6c:b2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:82:f5:cf Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f0:66:07 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d0:5f:c4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:41:57:e1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:96:33:1c:69:5a:eb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0e:3d:de:22:1b:aa Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.043787 4832 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.044045 4832 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.044496 4832 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.044817 4832 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.044877 4832 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.046016 4832 topology_manager.go:138] "Creating topology manager with none policy"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.046045 4832 container_manager_linux.go:303] "Creating device plugin manager"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.046528 4832 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.046558 4832 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.046811 4832 state_mem.go:36] "Initialized new in-memory state store"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.046946 4832 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.075481 4832 kubelet.go:418] "Attempting to sync node with API server"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.075550 4832 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.075581 4832 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.075613 4832 kubelet.go:324] "Adding apiserver pod source"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.075634 4832 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 02 18:20:45 crc kubenswrapper[4832]: W1002 18:20:45.086149 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.086256 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Oct 02 18:20:45 crc kubenswrapper[4832]: W1002 18:20:45.086239 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.086377 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.096887 4832 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.098243 4832 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
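[Editor's note: the nodeConfig entry above pins down the node's resource accounting: SystemReserved memory=350Mi, KubeReserved unset, and a hard eviction threshold of memory.available<100Mi against the MemoryCapacity reported in the Machine entry. A back-of-the-envelope Go check of the resulting allocatable memory, using the standard Node Allocatable formula (allocatable = capacity - reserved - eviction), follows; the numbers are taken from the log, the helper itself is illustrative.]

package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	capacity := int64(33654124544)     // MemoryCapacity from the Machine entry
	systemReserved := int64(350 * Mi)  // "SystemReserved":{"memory":"350Mi"}
	kubeReserved := int64(0)           // "KubeReserved":null in the log
	hardEviction := int64(100 * Mi)    // memory.available hard eviction threshold

	allocatable := capacity - systemReserved - kubeReserved - hardEviction
	fmt.Printf("allocatable memory: %d bytes (~%.2f GiB)\n",
		allocatable, float64(allocatable)/(1024*Mi))
}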
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.114668 4832 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118381 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118426 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118440 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118453 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118474 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118488 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118500 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118521 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118534 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118550 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118579 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.118592 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.120674 4832 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.121240 4832 server.go:1280] "Started kubelet"
Oct 02 18:20:45 crc systemd[1]: Started Kubernetes Kubelet.
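[Editor's note: the plugins.go:603 entries above record registration of the in-tree volume plugins under their "kubernetes.io/<name>" keys. A minimal Go sketch of that kind of name-keyed registry with duplicate detection follows; the VolumePlugin interface and method names here are illustrative, not the actual kubelet volume plugin API.]

package main

import "fmt"

type VolumePlugin interface {
	Name() string // e.g. "kubernetes.io/empty-dir"
}

type emptyDir struct{}

func (emptyDir) Name() string { return "kubernetes.io/empty-dir" }

type registry map[string]VolumePlugin

// load registers a plugin under its name, rejecting duplicates, and logs
// a line shaped like the journal entries above.
func (r registry) load(p VolumePlugin) error {
	if _, dup := r[p.Name()]; dup {
		return fmt.Errorf("volume plugin %q was registered more than once", p.Name())
	}
	r[p.Name()] = p
	fmt.Printf("Loaded volume plugin pluginName=%q\n", p.Name())
	return nil
}

func main() {
	r := registry{}
	_ = r.load(emptyDir{})
	if err := r.load(emptyDir{}); err != nil {
		fmt.Println("E:", err)
	}
}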
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.125923 4832 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.125946 4832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.126108 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.126632 4832 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.135853 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.135916 4832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.137022 4832 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.137206 4832 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.137224 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.137284 4832 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.137020 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:11:08.507803468 +0000 UTC
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.137499 4832 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1682h50m23.370315868s for next certificate rotation
Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.137665 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="200ms"
Oct 02 18:20:45 crc kubenswrapper[4832]: W1002 18:20:45.137818 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused
Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.137909 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.139363 4832 factory.go:153] Registering CRI-O factory
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.139401 4832 factory.go:221] Registration of the crio container factory successfully
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.139948 4832 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.139979 4832 factory.go:55] Registering systemd factory
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.139993 4832 factory.go:221] Registration of the systemd container factory successfully
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.140041 4832 factory.go:103] Registering Raw factory
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.140068 4832 manager.go:1196] Started watching for new ooms in manager
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.141128 4832 manager.go:319] Starting recovery of all containers
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.154443 4832 server.go:460] "Adding debug handlers to kubelet server"
Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.147815 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186abf939ae20563 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-02 18:20:45.121201507 +0000 UTC m=+2.090644389,LastTimestamp:2025-10-02 18:20:45.121201507 +0000 UTC m=+2.090644389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183054 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183183 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183201 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183242 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183285 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183302 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183315 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183335 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183354 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183367 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183386 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183401 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183411 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183425 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183466 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183480 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183491 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183501 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183511 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183524 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183535 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183547 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183562 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183601 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183615 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183628 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183700 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183720 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183733 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183746 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183758 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183773 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183784 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183797 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183809 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183821 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183832 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183846 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183857 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183867 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183878 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183891 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183908 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183920 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183932 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183943 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183954 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183969 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183979 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.183991 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184001 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184018 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184036 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184049 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184064 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184083 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184097 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184111 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184126 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184140 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184153 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184168 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184182 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184195 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184208 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184220 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184231 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184243 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184254 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184288 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184303 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184315 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184327 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184340 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184354 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184366 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184379 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184392 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184405 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184417 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184431 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184443 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184455 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184472 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184484 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184498 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184511 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184524 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184536 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184551 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184565 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184577 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184593 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184605 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184618 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184635 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184649 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184663 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184677 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184688 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184700 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184713 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184725 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184737 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184757 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184774 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184791 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184806 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184822 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184835 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184849 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184862 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184875 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184885 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.184897 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187730 4832 manager.go:324] Recovery completed
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187866 4832 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187906 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187923 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187938 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187954 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187968 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187983 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.187998 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188012 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188026 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188040 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188054 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188066 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188078 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188092 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188105 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188118 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188131 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188143 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188155 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188167 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188180 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188193 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188206 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188222 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188235 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188248 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188264 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188294 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188309 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188324 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188336 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188349 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188361 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188376 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188389 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188402 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188414 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188426 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188440 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188453 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188466 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188479 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188494 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188507 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188520 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188533 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188546 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188560 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188585 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188603 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188617 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188630 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188648 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188660 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188675 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188687 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188729 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188755 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188777 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188792 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188814 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188825 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188838 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188850 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188862 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188874 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188889 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188906 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188918 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188935 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188967 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188981 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.188994 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189009 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189021 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189033 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189057 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189070 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189081 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189093 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189105 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189116 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189129 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189141 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189156 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189169 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189184 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189195 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189209 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189224 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189237 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189251 4832 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189284 4832 reconstruct.go:97] "Volume reconstruction finished" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.189293 4832 reconciler.go:26] "Reconciler: start to sync state" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.205169 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.208210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.208528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.208591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.210994 4832 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.211063 4832 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.211159 4832 state_mem.go:36] "Initialized new in-memory state store" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.219870 4832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.221439 4832 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.221506 4832 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.221540 4832 kubelet.go:2335] "Starting kubelet main sync loop" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.221735 4832 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 02 18:20:45 crc kubenswrapper[4832]: W1002 18:20:45.222578 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.222661 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.238282 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.320483 4832 policy_none.go:49] "None policy: Start" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.321962 4832 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.322005 4832 state_mem.go:35] "Initializing new in-memory state store" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.321977 4832 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.338431 4832 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.338867 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="400ms" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.403193 4832 manager.go:334] "Starting Device Plugin manager" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.403249 4832 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.403279 4832 server.go:79] "Starting device plugin registration server" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.403704 4832 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.403720 4832 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.403903 4832 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.404337 4832 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 
18:20:45.404353 4832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.412349 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.504358 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.507497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.507555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.507567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.507598 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.508189 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.522325 4832 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.522490 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.523885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.523932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.523946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.524115 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.524462 4832 util.go:30] "No sandbox for pod can be found. 
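
The "Failed to ensure lease exists, will retry" entry above reports interval="400ms", and the same message reappears below with interval="800ms": the node-lease controller doubles its retry interval while the API server at api-int.crc.testing:6443 keeps refusing connections. A minimal sketch of that doubling, with a cap value that is an assumption for illustration (no cap appears anywhere in this log):

    # Reproduces the retry-interval doubling visible in these entries:
    # 400ms on the first failure, 800ms on the next. The 7s cap is an
    # assumption for illustration only.
    def next_interval(current_ms: int, cap_ms: int = 7000) -> int:
        return min(current_ms * 2, cap_ms)

    interval = 400
    for attempt in range(1, 5):
        print(f"attempt {attempt}: retry in {interval}ms")
        interval = next_interval(interval)
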
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.524589 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525257 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525486 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525635 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525681 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.525870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.526291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.526317 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.526334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.526512 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.526621 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.526664 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.526980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.527006 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.527017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.527322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.527348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.527359 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.527486 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.527983 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.528021 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.528538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.528572 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.528584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.528786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.528806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.528816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.529122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.529146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.529156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.529316 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.529343 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.530159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.530185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.530198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595481 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595553 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595592 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595617 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595640 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595665 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:45 
crc kubenswrapper[4832]: I1002 18:20:45.595712 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595734 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595757 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595778 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595800 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595823 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595845 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.595900 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697525 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697561 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697583 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697602 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697625 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697712 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697759 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697794 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697783 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697730 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.697914 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 
18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698156 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698173 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698115 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698293 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.698445 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.709226 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.711004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:45 crc 
kubenswrapper[4832]: I1002 18:20:45.711072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.711086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.711123 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.711854 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Oct 02 18:20:45 crc kubenswrapper[4832]: E1002 18:20:45.739934 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="800ms" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.852257 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.874166 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.882585 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.905279 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:45 crc kubenswrapper[4832]: I1002 18:20:45.910011 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.020649 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:46 crc kubenswrapper[4832]: E1002 18:20:46.020767 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.112918 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.114543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.114594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.114606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.114639 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 18:20:46 crc kubenswrapper[4832]: E1002 18:20:46.115251 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.127081 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.171982 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0c6db2163fd774afd3d607689855fd501872f6d7480bb73874041478c372b4dc WatchSource:0}: Error finding container 0c6db2163fd774afd3d607689855fd501872f6d7480bb73874041478c372b4dc: Status 404 returned error can't find the container with id 0c6db2163fd774afd3d607689855fd501872f6d7480bb73874041478c372b4dc Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.174140 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b27350d8d8037d28cf8f7186e3fff2bba37dcfd669e00c330eb1ad065c0f0a28 WatchSource:0}: Error finding container b27350d8d8037d28cf8f7186e3fff2bba37dcfd669e00c330eb1ad065c0f0a28: Status 404 returned error can't find the container with id b27350d8d8037d28cf8f7186e3fff2bba37dcfd669e00c330eb1ad065c0f0a28 Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.175576 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4f5a68eb4414d18a1ac23a92e375829a726fa4ea99ec6c08a52979c89f260020 WatchSource:0}: Error finding container 4f5a68eb4414d18a1ac23a92e375829a726fa4ea99ec6c08a52979c89f260020: Status 404 returned error can't find the container with id 4f5a68eb4414d18a1ac23a92e375829a726fa4ea99ec6c08a52979c89f260020 Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.177781 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-263af0d302b8de84f1b763c27b470b27e0d235f63c30cf144e7995914355e3c6 WatchSource:0}: Error finding container 263af0d302b8de84f1b763c27b470b27e0d235f63c30cf144e7995914355e3c6: Status 404 returned error can't find the container with id 263af0d302b8de84f1b763c27b470b27e0d235f63c30cf144e7995914355e3c6 Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.180747 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-fae5d966d54c91466709b33772169cd9ebf6c1a01689661073ffb20a69af984d WatchSource:0}: Error finding container fae5d966d54c91466709b33772169cd9ebf6c1a01689661073ffb20a69af984d: Status 404 returned error can't find the container with id fae5d966d54c91466709b33772169cd9ebf6c1a01689661073ffb20a69af984d Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.227373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4f5a68eb4414d18a1ac23a92e375829a726fa4ea99ec6c08a52979c89f260020"} Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.228773 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fae5d966d54c91466709b33772169cd9ebf6c1a01689661073ffb20a69af984d"} Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.230001 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"263af0d302b8de84f1b763c27b470b27e0d235f63c30cf144e7995914355e3c6"} Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.231307 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0c6db2163fd774afd3d607689855fd501872f6d7480bb73874041478c372b4dc"} Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.233222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b27350d8d8037d28cf8f7186e3fff2bba37dcfd669e00c330eb1ad065c0f0a28"} Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.320373 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:46 crc kubenswrapper[4832]: E1002 18:20:46.320521 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.339098 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:46 crc kubenswrapper[4832]: E1002 18:20:46.339217 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:46 crc kubenswrapper[4832]: E1002 18:20:46.541189 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="1.6s" Oct 02 18:20:46 crc kubenswrapper[4832]: W1002 18:20:46.652144 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:46 crc kubenswrapper[4832]: E1002 18:20:46.652307 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.915599 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.917376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.917432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.917450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:46 crc kubenswrapper[4832]: I1002 18:20:46.917484 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 18:20:46 crc kubenswrapper[4832]: E1002 18:20:46.917975 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Oct 02 18:20:47 crc kubenswrapper[4832]: I1002 18:20:47.126999 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:48 crc kubenswrapper[4832]: I1002 18:20:48.127627 4832 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:48 crc kubenswrapper[4832]: E1002 18:20:48.142331 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="3.2s" Oct 02 18:20:48 crc kubenswrapper[4832]: W1002 18:20:48.192353 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:48 crc kubenswrapper[4832]: E1002 18:20:48.192449 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:48 crc kubenswrapper[4832]: I1002 18:20:48.519216 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:48 crc kubenswrapper[4832]: I1002 18:20:48.520964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:48 crc kubenswrapper[4832]: I1002 18:20:48.521060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:48 crc kubenswrapper[4832]: I1002 18:20:48.521079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:48 crc kubenswrapper[4832]: I1002 18:20:48.521115 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 18:20:48 crc kubenswrapper[4832]: E1002 18:20:48.521737 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Oct 02 18:20:48 crc kubenswrapper[4832]: W1002 18:20:48.918298 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:48 crc kubenswrapper[4832]: E1002 18:20:48.918987 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.127529 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.243381 4832 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerID="308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751" exitCode=0 Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.243515 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.243484 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751"} Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.244844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.244877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.244887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.247144 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152" exitCode=0 Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.247297 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152"} Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.247383 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.249141 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b2af1abe7931ccc68b9080297cf7d4e34246373f8e6db373af8eb1611d5ebb0e" exitCode=0 Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.249207 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b2af1abe7931ccc68b9080297cf7d4e34246373f8e6db373af8eb1611d5ebb0e"} Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.249292 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.249314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.249326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.249394 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.250484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.250543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.250566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.251027 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.251870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.251901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.251911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.253243 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e"} Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.255453 4832 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9" exitCode=0 Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.255492 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9"} Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.255559 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.256336 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.256370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:49 crc kubenswrapper[4832]: I1002 18:20:49.256380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:49 crc kubenswrapper[4832]: W1002 18:20:49.335097 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:49 crc kubenswrapper[4832]: E1002 18:20:49.335221 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:49 crc kubenswrapper[4832]: W1002 18:20:49.680493 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:49 crc kubenswrapper[4832]: E1002 18:20:49.680604 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.127496 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.261148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b"} Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.264471 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd"} Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.267237 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c502d61d3008a2a23c0c55b45c8b42bd7c3d3faeb9988f77041c917b503c97c5" exitCode=0 Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.267334 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c502d61d3008a2a23c0c55b45c8b42bd7c3d3faeb9988f77041c917b503c97c5"} Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.267381 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.268326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.268352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.268362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.272132 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0"} Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.274040 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6fb52e75f1f63ead7516b0cd983bf9fb364cc0f67336eb59513bdfddcb7f803d"} Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.274141 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.275251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.275311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:20:50 crc kubenswrapper[4832]: I1002 18:20:50.275326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.127258 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.279360 4832 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="252551d444e623579b1f83f408e45632529c41c88c05995ee8484091a28b9601" exitCode=0 Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.279421 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"252551d444e623579b1f83f408e45632529c41c88c05995ee8484091a28b9601"} Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.279527 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.280883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.280933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.280954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.283492 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06"} Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.283841 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.283880 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51"} Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.285412 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.285440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.285453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.290029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c"} Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.290102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682"} Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.292892 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2"} Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.292936 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.293880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.293929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.293942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:51 crc kubenswrapper[4832]: E1002 18:20:51.344218 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.180:6443: connect: connection refused" interval="6.4s" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.722142 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.724100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.724150 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.724159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:51 crc kubenswrapper[4832]: I1002 18:20:51.724187 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 18:20:51 crc kubenswrapper[4832]: E1002 18:20:51.724958 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.180:6443: connect: connection refused" node="crc" Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.128029 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.307304 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca"} Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.310494 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"66c7e1b65fdc4b5cafb2996ffd4a0e7d30ef1d1d105c7332d9a36b3105ba8a31"} Oct 02 18:20:52 crc 
kubenswrapper[4832]: I1002 18:20:52.310577 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.310673 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.311845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.311883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.311895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.312442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.312486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:52 crc kubenswrapper[4832]: I1002 18:20:52.312497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:52 crc kubenswrapper[4832]: W1002 18:20:52.373773 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:52 crc kubenswrapper[4832]: E1002 18:20:52.373913 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.127237 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.139512 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.316254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f9b6736f9bfc55bb1c26aec3a83fe6043fd2132ac492ee4bbe8941a6ad016c8b"} Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.316649 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.317098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe"} Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.318437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 
18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.324905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.325121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.327825 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.328613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7d93e13e76256a8afdcfea5649cffa167c25d3dc4c445da792844e4850ea3ac9"} Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.328664 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"33ecaa6f5a7c6aa2f2832760d9db34f888813a0ce8b933e401c732f402ad869d"} Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.329738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.329793 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:53 crc kubenswrapper[4832]: I1002 18:20:53.329809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:53 crc kubenswrapper[4832]: E1002 18:20:53.822608 4832 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.180:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186abf939ae20563 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-02 18:20:45.121201507 +0000 UTC m=+2.090644389,LastTimestamp:2025-10-02 18:20:45.121201507 +0000 UTC m=+2.090644389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.127610 4832 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.337482 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"26b41f12a13b1258736c1b2b268fd9d87a06e98304be6d5293b67411971ff4f4"} Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.337561 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1c81eada526d32cc65ad9cad76d4eeec3ec202c4f47eb657141b8042bc1b5b93"} Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.337684 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:54 crc 
kubenswrapper[4832]: I1002 18:20:54.339312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.339368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.339379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.340436 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.342340 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9b6736f9bfc55bb1c26aec3a83fe6043fd2132ac492ee4bbe8941a6ad016c8b" exitCode=255 Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.342374 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f9b6736f9bfc55bb1c26aec3a83fe6043fd2132ac492ee4bbe8941a6ad016c8b"} Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.342539 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.343881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.343935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.343956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.344852 4832 scope.go:117] "RemoveContainer" containerID="f9b6736f9bfc55bb1c26aec3a83fe6043fd2132ac492ee4bbe8941a6ad016c8b" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.348520 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.348689 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.349941 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.349981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.349993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:54 crc kubenswrapper[4832]: W1002 18:20:54.430107 4832 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.180:6443: connect: connection refused Oct 02 18:20:54 crc kubenswrapper[4832]: E1002 18:20:54.430213 4832 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.180:6443: connect: connection refused" logger="UnhandledError" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.859183 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.978767 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.979217 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Oct 02 18:20:54 crc kubenswrapper[4832]: I1002 18:20:54.979306 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.348327 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.350230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263"} Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.350411 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.350516 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.350549 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.350508 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.351587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.351614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.351627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.351749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.351786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.351810 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.351986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.352009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.352021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:55 crc kubenswrapper[4832]: E1002 18:20:55.412489 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 02 18:20:55 crc kubenswrapper[4832]: I1002 18:20:55.683765 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.071126 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.357658 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.357731 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.357780 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.360127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.360177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.360189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.360205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.360258 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:56 crc kubenswrapper[4832]: I1002 18:20:56.360290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:57 crc kubenswrapper[4832]: I1002 18:20:57.360634 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:57 crc kubenswrapper[4832]: I1002 18:20:57.361875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:57 crc kubenswrapper[4832]: I1002 18:20:57.361934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:57 crc kubenswrapper[4832]: I1002 18:20:57.361946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:57 crc kubenswrapper[4832]: I1002 18:20:57.860213 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 02 18:20:57 crc kubenswrapper[4832]: I1002 18:20:57.860594 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 18:20:58 crc kubenswrapper[4832]: I1002 18:20:58.125401 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:58 crc kubenswrapper[4832]: I1002 18:20:58.127071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:58 crc kubenswrapper[4832]: I1002 18:20:58.127115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:58 crc kubenswrapper[4832]: I1002 18:20:58.127127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:20:58 crc kubenswrapper[4832]: I1002 18:20:58.127157 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 18:20:59 crc kubenswrapper[4832]: I1002 18:20:59.325255 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:20:59 crc kubenswrapper[4832]: I1002 18:20:59.325579 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:20:59 crc kubenswrapper[4832]: I1002 18:20:59.327289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:20:59 crc kubenswrapper[4832]: I1002 18:20:59.327329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:20:59 crc kubenswrapper[4832]: I1002 18:20:59.327340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.296642 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.296967 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.299042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.299215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.299322 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.306217 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.370296 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:21:00 crc 
Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.372031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.372102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.372127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:00 crc kubenswrapper[4832]: I1002 18:21:00.375543 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 02 18:21:01 crc kubenswrapper[4832]: I1002 18:21:01.374154 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 02 18:21:01 crc kubenswrapper[4832]: I1002 18:21:01.375639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:01 crc kubenswrapper[4832]: I1002 18:21:01.375676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:01 crc kubenswrapper[4832]: I1002 18:21:01.375687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:03 crc kubenswrapper[4832]: I1002 18:21:03.877414 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 02 18:21:03 crc kubenswrapper[4832]: I1002 18:21:03.877503 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 02 18:21:03 crc kubenswrapper[4832]: I1002 18:21:03.982673 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Oct 02 18:21:03 crc kubenswrapper[4832]: I1002 18:21:03.982895 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 02 18:21:03 crc kubenswrapper[4832]: I1002 18:21:03.984669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:03 crc kubenswrapper[4832]: I1002 18:21:03.984736 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:03 crc kubenswrapper[4832]: I1002 18:21:03.984753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:04 crc kubenswrapper[4832]: I1002 18:21:04.017641 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Oct 02 18:21:04 crc kubenswrapper[4832]: I1002 18:21:04.382876 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 02 18:21:04 crc kubenswrapper[4832]: I1002 18:21:04.384047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:04 crc kubenswrapper[4832]: I1002 18:21:04.384087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:04 crc kubenswrapper[4832]: I1002 18:21:04.384097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:04 crc kubenswrapper[4832]: I1002 18:21:04.397967 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Oct 02 18:21:04 crc kubenswrapper[4832]: I1002 18:21:04.983700 4832 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]log ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]etcd ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-startkubeinformers ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-apiserver-admission-initializer ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/openshift.io-api-request-count-filter ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/generic-apiserver-start-informers ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/priority-and-fairness-config-consumer ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/priority-and-fairness-filter ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/storage-object-count-tracker-hook ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-apiextensions-informers ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-apiextensions-controllers ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/crd-informer-synced ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-system-namespaces-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-cluster-authentication-info-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-legacy-token-tracking-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-service-ip-repair-controllers ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/priority-and-fairness-config-producer ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/bootstrap-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/start-kube-aggregator-informers ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/apiservice-status-local-available-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/apiservice-status-remote-available-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/apiservice-registration-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/apiservice-wait-for-first-sync ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/apiservice-discovery-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/kube-apiserver-autoregistration ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]autoregister-completion ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/apiservice-openapi-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: [+]poststarthook/apiservice-openapiv3-controller ok
Oct 02 18:21:04 crc kubenswrapper[4832]: livez check failed
Oct 02 18:21:04 crc kubenswrapper[4832]: I1002 18:21:04.983765 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 02 18:21:05 crc kubenswrapper[4832]: I1002 18:21:05.385685 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 02 18:21:05 crc kubenswrapper[4832]: I1002 18:21:05.386683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:05 crc kubenswrapper[4832]: I1002 18:21:05.386758 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:05 crc kubenswrapper[4832]: I1002 18:21:05.386782 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:05 crc kubenswrapper[4832]: E1002 18:21:05.412592 4832 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 02 18:21:07 crc kubenswrapper[4832]: I1002 18:21:07.860613 4832 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 02 18:21:07 crc kubenswrapper[4832]: I1002 18:21:07.861431 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 02 18:21:08 crc kubenswrapper[4832]: E1002 18:21:08.883847 4832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s"
Oct 02 18:21:08 crc kubenswrapper[4832]: I1002 18:21:08.888057 4832 trace.go:236] Trace[1813550239]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Oct-2025 18:20:55.530) (total time: 13353ms):
Oct 02 18:21:08 crc kubenswrapper[4832]: Trace[1813550239]: ---"Objects listed" error: 13353ms (18:21:08.884)
Oct 02 18:21:08 crc kubenswrapper[4832]: Trace[1813550239]: [13.353895506s] [13.353895506s] END
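The 18:21:04 probe body above is the apiserver's verbose health report: one [+] or [-] line per registered check, and the single [-]poststarthook/rbac/bootstrap-roles failure is enough to turn /livez into HTTP 500, just as the earlier 403 shows the probe being rejected for system:anonymous. A sketch of fetching the same report; the /livez?verbose endpoint is standard Kubernetes, while the disabled certificate verification and anonymous (token-less) request are assumptions for illustration:

// livez_verbose.go - sketch: fetch the apiserver's verbose liveness report.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	// Host taken from the lease URL in the log; a 403 here matches the
	// "system:anonymous cannot get path /livez" record above.
	resp, err := client.Get("https://api-int.crc.testing:6443/livez?verbose")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // one "[+]name ok" / "[-]name failed" line per check
}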
Oct 02 18:21:08 crc kubenswrapper[4832]: I1002 18:21:08.888106 4832 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 02 18:21:08 crc kubenswrapper[4832]: I1002 18:21:08.888597 4832 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 02 18:21:08 crc kubenswrapper[4832]: E1002 18:21:08.888811 4832 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Oct 02 18:21:08 crc kubenswrapper[4832]: I1002 18:21:08.891134 4832 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Oct 02 18:21:08 crc kubenswrapper[4832]: I1002 18:21:08.893097 4832 trace.go:236] Trace[1267901113]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Oct-2025 18:20:55.681) (total time: 13211ms):
Oct 02 18:21:08 crc kubenswrapper[4832]: Trace[1267901113]: ---"Objects listed" error: 13210ms (18:21:08.892)
Oct 02 18:21:08 crc kubenswrapper[4832]: Trace[1267901113]: [13.211126306s] [13.211126306s] END
Oct 02 18:21:08 crc kubenswrapper[4832]: I1002 18:21:08.893161 4832 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.090144 4832 apiserver.go:52] "Watching apiserver"
Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.092669 4832 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.092874 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.093296 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.093342 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.093390 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.093460 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.093481 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
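The Trace and "Caches populated" records above time client-go's Reflector ListAndWatch: one initial LIST (the ~13s "Objects listed" step, slow here because the apiserver was still coming up), then a WATCH that keeps the local cache current. A minimal client-go sketch of the same pattern, assuming it runs in a pod with in-cluster credentials:

// informer_sketch.go - sketch of the Reflector ListAndWatch pattern.
package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 0)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // each informer runs a Reflector doing ListAndWatch

	// Blocks until the initial list completes - the "Objects listed" /
	// "Caches populated" step the traces above measure.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("node cache populated")
}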
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.093708 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.093916 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.094149 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.096409 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.096413 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.096489 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.096530 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.096616 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.096850 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.097515 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.097933 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.098159 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.123961 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.137018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.138560 4832 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.148901 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.158408 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.169350 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.183876 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192669 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192736 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192775 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192807 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192841 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192874 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192905 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192945 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
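Every status patch above fails the same way: the pod.network-node-identity.openshift.io admission webhook behind https://127.0.0.1:9743 is not serving yet, so the apiserver's POST to it is refused and the kubelet's patch is rejected with an internal error. A sketch that checks only the transport-level symptom; the address is taken from the log:

// webhook_dial.go - sketch: is anything listening on the webhook endpoint?
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the webhook URL in the records above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint unreachable:", err) // "connection refused" here
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections")
}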
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.192980 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193013 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193092 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193099 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193181 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193204 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193284 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193306 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193327 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193390 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193405 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193436 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193459 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193455 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193481 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193484 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193508 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193534 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193560 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193584 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193606 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193629 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193628 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193651 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193672 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193735 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193772 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193818 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193840 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193886 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193910 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193930 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193955 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193978 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.193998 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194057 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194077 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194103 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194106 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194131 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194135 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194142 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194182 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194205 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194215 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194229 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194370 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194413 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194450 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194480 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194404 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194421 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194506 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194556 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194616 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194645 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194668 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194693 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194717 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194740 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194790 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194815 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194840 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194886 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194909 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194933 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194960 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194983 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195030 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195106 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195151 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195176 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195224 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195252 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195297 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195325 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195375 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195400 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195424 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195450 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195475 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195498 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195522 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195544 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195573 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195596 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195618 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195644 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195666 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195689 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196094 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196135 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196217 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196241 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196277 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196300 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196326 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196381 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196404 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196432 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196456 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196503 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196527 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196555 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196578 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196609 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196636 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196661 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196685 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196714 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196739 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196765 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197261 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197311 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197363 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197388 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197439 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197466 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197491 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197521 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197546 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197571 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197594 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197617 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197640 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197666 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197689 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197716 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197741 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197790 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197823 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197849 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197876 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198023 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198052 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198076 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198096 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198114 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198146 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198163 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198179 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198247 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198294 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198344 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198400 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198423 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198448 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198474 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198502 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198553 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198579 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198604 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198627 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198648 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198670 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198693 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198720 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198743 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198770 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198794 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198841 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198864 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198893 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198918 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198943 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199202 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199352 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199380 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199409 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199434 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199458 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199482 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199508 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199532 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199608 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199700 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199758 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199787 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199843 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199874 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199929 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199957 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 18:21:09 crc 
kubenswrapper[4832]: I1002 18:21:09.200081 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200099 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200114 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200130 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200147 4832 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200162 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200176 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200191 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200207 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200222 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200236 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200249 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200282 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194661 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194782 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194879 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194897 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.194905 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195228 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195510 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195567 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195774 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195790 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195742 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195877 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195897 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.195986 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196098 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196170 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196289 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196319 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196388 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196318 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196465 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196501 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196688 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196790 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.196811 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197032 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197222 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197251 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197566 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.201566 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.201603 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.201594 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197655 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198509 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198507 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198512 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198677 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198810 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.198841 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199108 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.199640 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200199 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200254 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200332 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200346 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200647 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.201854 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200679 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200794 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200950 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.200985 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.201152 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.201179 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.201223 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.197621 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.202015 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.202537 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.202924 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.202949 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.202992 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.203159 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.203620 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.203956 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204000 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204003 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204112 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204431 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204616 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204654 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204754 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.204886 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205177 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205220 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205579 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205628 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205760 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205783 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205859 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.205968 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206302 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206301 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206481 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206550 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206646 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206666 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206769 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206845 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206904 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.206966 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.207179 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.207246 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.207309 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.207367 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.207700 4832 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.207997 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.208095 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.208179 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.208317 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.208337 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.208367 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.208477 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.208540 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.208732 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.209090 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.209101 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.209408 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.209556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.209350 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.209701 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.209878 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.210247 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.210394 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.210464 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.210487 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.211089 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.211385 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.211642 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.212039 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.212314 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.212635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.212836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.212882 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
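The status_manager.go:875 failure above shows the kubelet trying to patch the pod's status subresource with a strategic merge patch (note the $setElementOrder/conditions directive in the escaped JSON) and being rejected because the pod.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743 is not serving yet. A minimal client-go sketch of the same kind of call, assuming an in-cluster config; the namespace and pod name are taken from the log entry, and the patch body is a simplified stand-in:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// A strategic merge patch against the status subresource, like the one
	// the kubelet logs above; an admission webhook can still reject it.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"ContainersNotReady"}]}}`)

	_, err = clientset.CoreV1().Pods("openshift-network-console").Patch(
		context.TODO(),
		"networking-console-plugin-85b44fc459-gdk6g",
		types.StrategicMergePatchType,
		patch,
		metav1.PatchOptions{},
		"status", // target the status subresource
	)
	if err != nil {
		// e.g. "failed calling webhook ... connection refused", as in the log
		panic(err)
	}
}
```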
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.212958 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.212973 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.213073 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:09.713011941 +0000 UTC m=+26.682454973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.213119 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.213414 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.213439 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:21:09.713402443 +0000 UTC m=+26.682845315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.213551 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.213592 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.213609 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.213678 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.213749 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:09.713720373 +0000 UTC m=+26.683163425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.213758 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.214087 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
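The TearDown failure for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 above is different in kind from the surrounding "not registered" errors: the kubelet cannot even construct a CSI client, because the kubevirt.io.hostpath-provisioner driver has not yet re-registered with the node's plugin registry (typical in the first seconds after a kubelet restart). The set of drivers a node currently knows about is published on its CSINode object; a minimal read-only sketch, assuming an in-cluster config and using the node name crc from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// CSINode lists the CSI drivers registered on this node;
	// kubevirt.io.hostpath-provisioner should reappear here once its
	// node plugin re-registers after the kubelet restart.
	csiNode, err := clientset.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}
```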
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.214091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.214414 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.214419 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.214489 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.214624 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.214961 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215013 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215306 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215425 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215457 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215561 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215616 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215627 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215828 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.215905 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.216651 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.216734 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.217309 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.217557 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.217678 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.217941 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.218078 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.222399 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.223673 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.223827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.223894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.224046 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.225091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.226568 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.226731 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.226848 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.226876 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.226893 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.226972 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:09.726946942 +0000 UTC m=+26.696389904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.227665 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.227698 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.227716 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.227787 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:09.727763148 +0000 UTC m=+26.697206190 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.228485 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.229365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.229613 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.229761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.230597 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.231453 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.232984 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.233723 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.235125 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.235835 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod 
"1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.236186 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.236308 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.236818 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.236782 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.237086 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.237259 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.237413 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.237457 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.237669 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.237735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.237318 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.240809 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.240842 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.241075 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.241073 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.241872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.242642 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.245105 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.247050 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.249998 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.250312 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.251878 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.254307 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.256432 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.256855 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.258620 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.259872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.260712 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.263149 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.264867 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.266111 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.268435 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.269410 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.270026 4832 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.270733 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.271774 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.272421 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.273155 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.274823 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.275641 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.277180 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.277839 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.279238 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.279730 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.282446 4832 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.282582 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.286235 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.286820 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.287817 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.290000 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.290728 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.291818 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.293011 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.294056 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.295018 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.295640 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.296650 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.297658 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.299359 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.300414 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301355 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301477 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301516 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301540 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301560 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301578 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301608 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301629 4832 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301646 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301663 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301681 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301559 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301697 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301715 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301736 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301755 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301771 4832 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301788 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301805 4832 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301823 4832 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301840 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301858 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301876 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301893 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.301909 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302116 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302140 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" 
(UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302160 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302183 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302207 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302231 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302254 4832 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302300 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302318 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302324 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302336 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302443 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302466 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302487 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302508 4832 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302527 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302547 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302565 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302584 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302635 4832 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302652 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302670 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302687 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302704 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302721 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302739 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302757 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302776 4832 reconciler_common.go:293] "Volume detached for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302795 4832 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302813 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302830 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302848 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302865 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302882 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302901 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302918 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302936 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302953 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302970 4832 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.302989 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303006 4832 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303022 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303040 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303057 4832 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303074 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303091 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303108 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303126 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303143 4832 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303161 4832 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303185 4832 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303208 4832 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303228 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303248 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303290 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303307 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303325 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303345 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303362 4832 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303380 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303400 4832 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303417 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303434 4832 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303452 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303473 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303493 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303509 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303525 4832 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303542 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303559 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303576 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303594 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303745 4832 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303771 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303787 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303801 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303814 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303827 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303840 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303852 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303864 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303877 4832 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303889 4832 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303901 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303914 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303926 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303939 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303951 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303953 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.303963 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304085 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304102 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304115 4832 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304129 4832 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304142 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304154 4832 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304167 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304179 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304191 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304204 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304217 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304229 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304240 4832 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304253 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304285 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304298 4832 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304311 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304323 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304335 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304347 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304403 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304416 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304428 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304440 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304452 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304465 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304477 4832 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304488 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304500 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304512 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304524 4832 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304536 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304548 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304561 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304573 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304587 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304598 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304611 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304623 4832 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304637 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304648 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304660 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304672 4832 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304683 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304694 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304705 4832 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304717 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304732 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304762 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304779 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304795 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304811 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304823 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304834 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304848 4832 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304859 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304871 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304882 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304894 4832 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304905 4832 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304918 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304931 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304943 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304957 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304970 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304983 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.304996 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.305007 4832 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.305019 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.305030 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.305042 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.305505 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.306054 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.307433 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.308729 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.309480 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.311176 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.398822 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.399717 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.401872 4832 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263" exitCode=255 Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.401987 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263"} Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.402099 
4832 scope.go:117] "RemoveContainer" containerID="f9b6736f9bfc55bb1c26aec3a83fe6043fd2132ac492ee4bbe8941a6ad016c8b" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.409494 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.417976 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.424164 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.447671 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.448063 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.448646 4832 scope.go:117] "RemoveContainer" containerID="08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.448887 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.466539 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.482076 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.497630 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.509763 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.529116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.809336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.809544 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:21:10.809502167 +0000 UTC m=+27.778945089 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.809830 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.809862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.809886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.809911 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.809988 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810020 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810035 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810048 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810063 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:10.810044914 +0000 UTC m=+27.779487826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810085 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:10.810073575 +0000 UTC m=+27.779516457 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810119 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810235 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:10.81020786 +0000 UTC m=+27.779650812 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810137 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810303 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810317 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:09 crc kubenswrapper[4832]: E1002 18:21:09.810349 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:10.810339863 +0000 UTC m=+27.779782745 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:09 crc kubenswrapper[4832]: I1002 18:21:09.987030 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.001948 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9b6736f9bfc55bb1c26aec3a83fe6043fd2132ac492ee4bbe8941a6ad016c8b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:20:54Z\\\",\\\"message\\\":\\\"W1002 18:20:53.326744 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1002 
18:20:53.328118 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759429253 cert, and key in /tmp/serving-cert-66007532/serving-signer.crt, /tmp/serving-cert-66007532/serving-signer.key\\\\nI1002 18:20:53.749625 1 observer_polling.go:159] Starting file observer\\\\nW1002 18:20:53.755847 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:53.756056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:53.805310 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-66007532/tls.crt::/tmp/serving-cert-66007532/tls.key\\\\\\\"\\\\nF1002 18:20:54.073830 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.018142 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.031156 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.045369 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.056240 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.070463 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.085707 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.222196 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.222465 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.406149 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.408348 4832 scope.go:117] "RemoveContainer" containerID="08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263" Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.408508 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.409501 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e"} Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.409542 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a"} Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.409554 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8562f51fd54620091d57a8afa5b962986d2deb6d1274c6ffee88a0ab094d0705"} Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.410786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460"} Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.410821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"584383db46c48b7780707b3687441c2d4ede0c58a7ac867eb83df6f768bbf394"} Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.411876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"68d8d727ee131933d80c9a2c62de5f95145b53b3463e9537ef591416cee4245e"} Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.413378 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.426669 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.440641 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.457800 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.469833 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.482932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.494806 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.510999 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.526910 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.539817 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.559619 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.580389 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.595256 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.614820 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.639064 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.817907 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.818010 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.818046 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.818356 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.818470 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.818530 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.818548 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.818833 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:21:12.818174833 +0000 UTC m=+29.787617715 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.818880 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:10 crc kubenswrapper[4832]: I1002 18:21:10.818921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.818990 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.819039 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:12.819030411 +0000 UTC m=+29.788473283 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.819112 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.819127 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.819139 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.819164 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:12.819157945 +0000 UTC m=+29.788600927 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.819178 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:12.819172386 +0000 UTC m=+29.788615258 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:10 crc kubenswrapper[4832]: E1002 18:21:10.819191 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:12.819184896 +0000 UTC m=+29.788627768 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:11 crc kubenswrapper[4832]: I1002 18:21:11.224679 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:11 crc kubenswrapper[4832]: E1002 18:21:11.224818 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:11 crc kubenswrapper[4832]: I1002 18:21:11.225256 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:11 crc kubenswrapper[4832]: E1002 18:21:11.225361 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:11 crc kubenswrapper[4832]: I1002 18:21:11.227339 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 02 18:21:11 crc kubenswrapper[4832]: I1002 18:21:11.228563 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 02 18:21:11 crc kubenswrapper[4832]: I1002 18:21:11.230011 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 02 18:21:11 crc kubenswrapper[4832]: I1002 18:21:11.230847 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 02 18:21:11 crc kubenswrapper[4832]: I1002 18:21:11.415212 4832 scope.go:117] "RemoveContainer" containerID="08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263" Oct 02 18:21:11 crc kubenswrapper[4832]: E1002 18:21:11.415452 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 02 18:21:11 crc kubenswrapper[4832]: I1002 18:21:11.899307 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.222622 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.222847 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.419808 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38"} Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.420568 4832 scope.go:117] "RemoveContainer" containerID="08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263" Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.420804 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.438204 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:12Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.457960 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:12Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.475450 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:12Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.494027 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:12Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.511694 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:12Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.531037 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:12Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.550691 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:12Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.838817 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:21:16.83877209 +0000 UTC m=+33.808214962 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.838816 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.838970 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.839007 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.839036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:12 crc kubenswrapper[4832]: I1002 18:21:12.839064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839219 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839251 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839257 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839290 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered 
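[Editor's note] The UnmountVolume.TearDown failure above says kubelet cannot find "kubevirt.io.hostpath-provisioner" in its list of registered CSI drivers. CSI drivers normally register themselves by placing a UNIX socket under kubelet's plugin-registration directory. The following is a minimal illustrative sketch (not part of the log), assuming the default kubelet root /var/lib/kubelet and that it is run directly on the node, for listing what has actually registered:

    # Illustrative sketch (editor-added, hypothetical helper): list kubelet's
    # plugin-registration sockets to see whether the hostpath provisioner
    # has registered. Assumes the default kubelet root /var/lib/kubelet.
    import os

    REGISTRY = "/var/lib/kubelet/plugins_registry"  # default registration dir

    def registered_plugins(registry: str = REGISTRY) -> list[str]:
        """Return entries kubelet would treat as plugin registrations."""
        try:
            return sorted(os.listdir(registry))
        except FileNotFoundError:
            # kubelet root is elsewhere, or nothing has registered yet
            return []

    if __name__ == "__main__":
        plugins = registered_plugins()
        print("\n".join(plugins) or "no plugin registrations found")
        if not any("hostpath-provisioner" in p for p in plugins):
            print("kubevirt.io.hostpath-provisioner not registered (matches log)")

An empty or missing directory here would be consistent with the retry loop in the log: kubelet keeps rescheduling the teardown every 4s until the driver re-registers.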
Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839241 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839343 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839362 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839370 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:16.839341649 +0000 UTC m=+33.808784561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839392 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:16.83938552 +0000 UTC m=+33.808828392 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839432 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:16.839411811 +0000 UTC m=+33.808854683 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839312 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:12 crc kubenswrapper[4832]: E1002 18:21:12.839480 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:16.839473152 +0000 UTC m=+33.808916024 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.222678 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.222797 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:13 crc kubenswrapper[4832]: E1002 18:21:13.222824 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:13 crc kubenswrapper[4832]: E1002 18:21:13.222986 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.427096 4832 scope.go:117] "RemoveContainer" containerID="08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263" Oct 02 18:21:13 crc kubenswrapper[4832]: E1002 18:21:13.427371 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.936865 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hc6sg"] Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.941114 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zqjdg"] Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.942003 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.942341 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lhm4n"] Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.942937 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-fjjsq"] Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.943321 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.943418 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9sz9w"] Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.943670 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lhm4n" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.946995 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.947520 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fjjsq" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.948048 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.949631 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.949739 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.949917 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.949927 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.950311 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.950531 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.950858 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.951330 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.951409 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.951492 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.951735 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.952838 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.957442 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.957563 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.957766 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.957911 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.958161 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.958579 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.958819 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 02 18:21:13 crc kubenswrapper[4832]: 
I1002 18:21:13.958969 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.959249 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.965406 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2025-10-02T18:21:13Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.982562 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:13Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:13 crc kubenswrapper[4832]: I1002 18:21:13.995032 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:13Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.009130 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.027247 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.039089 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050131 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-etc-kubernetes\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050157 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e93ac374-cf01-41ab-a628-5c2cb5de7437-rootfs\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050202 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-os-release\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050222 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-ovn\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050241 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e93ac374-cf01-41ab-a628-5c2cb5de7437-mcd-auth-proxy-config\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050279 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-hostroot\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050317 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-system-cni-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e93ac374-cf01-41ab-a628-5c2cb5de7437-proxy-tls\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050352 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rdj6\" (UniqueName: \"kubernetes.io/projected/28e6c98b-e4b6-4027-8cf5-655985e80fac-kube-api-access-2rdj6\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050391 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7fg\" (UniqueName: \"kubernetes.io/projected/e93ac374-cf01-41ab-a628-5c2cb5de7437-kube-api-access-zc7fg\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050407 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-system-cni-dir\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050427 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-cnibin\") pod 
\"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050445 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec0c328b-b145-450a-aace-06bb839a1a02-cni-binary-copy\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050511 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-var-lib-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4d4t\" (UniqueName: \"kubernetes.io/projected/ec0c328b-b145-450a-aace-06bb839a1a02-kube-api-access-m4d4t\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050611 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec0c328b-b145-450a-aace-06bb839a1a02-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050633 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovn-node-metrics-cert\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-os-release\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050681 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf9fg\" (UniqueName: \"kubernetes.io/projected/a08746c9-6dd1-4414-a681-c8a254264429-kube-api-access-wf9fg\") pod \"node-resolver-fjjsq\" (UID: \"a08746c9-6dd1-4414-a681-c8a254264429\") " pod="openshift-dns/node-resolver-fjjsq" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050705 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-slash\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050726 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-cni-bin\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-kubelet\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050768 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-config\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050791 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-cni-multus\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050867 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-conf-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-systemd-units\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-etc-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-env-overrides\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.050980 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7319e265-17de-4801-8ab7-7671dba7489d-multus-daemon-config\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc 
kubenswrapper[4832]: I1002 18:21:14.051066 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-socket-dir-parent\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-k8s-cni-cncf-io\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051125 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-multus-certs\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051145 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qhh\" (UniqueName: \"kubernetes.io/projected/7319e265-17de-4801-8ab7-7671dba7489d-kube-api-access-t7qhh\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051179 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-bin\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-netd\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051221 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-cnibin\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7319e265-17de-4801-8ab7-7671dba7489d-cni-binary-copy\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051285 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 
crc kubenswrapper[4832]: I1002 18:21:14.051307 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-log-socket\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-netns\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051344 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-kubelet\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051365 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-script-lib\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-netns\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051409 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-node-log\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a08746c9-6dd1-4414-a681-c8a254264429-hosts-file\") pod \"node-resolver-fjjsq\" (UID: \"a08746c9-6dd1-4414-a681-c8a254264429\") " pod="openshift-dns/node-resolver-fjjsq" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051451 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-systemd\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.051481 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-cni-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc 
kubenswrapper[4832]: I1002 18:21:14.054638 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.066654 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.079470 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.095296 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.117973 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.144762 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.152468 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.152819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-etc-kubernetes\") pod \"multus-lhm4n\" (UID: 
\"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.152982 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e93ac374-cf01-41ab-a628-5c2cb5de7437-rootfs\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153068 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e93ac374-cf01-41ab-a628-5c2cb5de7437-rootfs\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.152606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-ovn-kubernetes\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.152909 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-etc-kubernetes\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153099 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-ovn\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e93ac374-cf01-41ab-a628-5c2cb5de7437-mcd-auth-proxy-config\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153242 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-os-release\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153274 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-hostroot\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153301 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-system-cni-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc 
kubenswrapper[4832]: I1002 18:21:14.153318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e93ac374-cf01-41ab-a628-5c2cb5de7437-proxy-tls\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153340 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153357 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec0c328b-b145-450a-aace-06bb839a1a02-cni-binary-copy\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153374 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-var-lib-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153403 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153420 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rdj6\" (UniqueName: \"kubernetes.io/projected/28e6c98b-e4b6-4027-8cf5-655985e80fac-kube-api-access-2rdj6\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7fg\" (UniqueName: \"kubernetes.io/projected/e93ac374-cf01-41ab-a628-5c2cb5de7437-kube-api-access-zc7fg\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-system-cni-dir\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153471 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-cnibin\") pod \"multus-additional-cni-plugins-zqjdg\" 
(UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153512 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec0c328b-b145-450a-aace-06bb839a1a02-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153513 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-var-lib-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153532 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4d4t\" (UniqueName: \"kubernetes.io/projected/ec0c328b-b145-450a-aace-06bb839a1a02-kube-api-access-m4d4t\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153617 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf9fg\" (UniqueName: \"kubernetes.io/projected/a08746c9-6dd1-4414-a681-c8a254264429-kube-api-access-wf9fg\") pod \"node-resolver-fjjsq\" (UID: \"a08746c9-6dd1-4414-a681-c8a254264429\") " pod="openshift-dns/node-resolver-fjjsq" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovn-node-metrics-cert\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153662 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-os-release\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-cni-bin\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153699 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-slash\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-kubelet\") pod \"ovnkube-node-9sz9w\" (UID: 
\"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153751 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-config\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153767 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-cni-multus\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-conf-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153789 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-os-release\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-systemd-units\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153832 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-etc-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153849 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-env-overrides\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153856 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-kubelet\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153867 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7319e265-17de-4801-8ab7-7671dba7489d-multus-daemon-config\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc 
kubenswrapper[4832]: I1002 18:21:14.153956 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-multus-certs\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153976 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qhh\" (UniqueName: \"kubernetes.io/projected/7319e265-17de-4801-8ab7-7671dba7489d-kube-api-access-t7qhh\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153995 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-socket-dir-parent\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154020 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-k8s-cni-cncf-io\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154046 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-log-socket\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-bin\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-cnibin\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154098 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-netd\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-cnibin\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154130 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7319e265-17de-4801-8ab7-7671dba7489d-cni-binary-copy\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154174 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-netns\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e93ac374-cf01-41ab-a628-5c2cb5de7437-mcd-auth-proxy-config\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-kubelet\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154279 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-netns\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154316 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-node-log\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-script-lib\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154351 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ec0c328b-b145-450a-aace-06bb839a1a02-cni-binary-copy\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154366 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-systemd\") pod 
\"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-systemd\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154218 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-kubelet\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154437 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-cni-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154471 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a08746c9-6dd1-4414-a681-c8a254264429-hosts-file\") pod \"node-resolver-fjjsq\" (UID: \"a08746c9-6dd1-4414-a681-c8a254264429\") " pod="openshift-dns/node-resolver-fjjsq" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154493 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-cni-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.153749 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-system-cni-dir\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154624 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ec0c328b-b145-450a-aace-06bb839a1a02-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154615 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ec0c328b-b145-450a-aace-06bb839a1a02-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " 
pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154657 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-system-cni-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154696 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-hostroot\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154708 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-netns\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-socket-dir-parent\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-cni-bin\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154758 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-var-lib-cni-multus\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154798 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-multus-conf-dir\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154818 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-systemd-units\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154839 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-etc-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154903 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-config\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-multus-certs\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.154957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-node-log\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155244 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-env-overrides\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155350 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-bin\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155357 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-script-lib\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-netd\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155396 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-slash\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155411 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-host-run-k8s-cni-cncf-io\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155432 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a08746c9-6dd1-4414-a681-c8a254264429-hosts-file\") pod \"node-resolver-fjjsq\" (UID: \"a08746c9-6dd1-4414-a681-c8a254264429\") " pod="openshift-dns/node-resolver-fjjsq" Oct 
02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-log-socket\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155475 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-os-release\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155467 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7319e265-17de-4801-8ab7-7671dba7489d-multus-daemon-config\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155502 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-openvswitch\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155527 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7319e265-17de-4801-8ab7-7671dba7489d-cnibin\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-netns\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.155977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7319e265-17de-4801-8ab7-7671dba7489d-cni-binary-copy\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.156073 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-ovn\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.160797 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e93ac374-cf01-41ab-a628-5c2cb5de7437-proxy-tls\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.163313 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovn-node-metrics-cert\") pod 
\"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.179671 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rdj6\" (UniqueName: \"kubernetes.io/projected/28e6c98b-e4b6-4027-8cf5-655985e80fac-kube-api-access-2rdj6\") pod \"ovnkube-node-9sz9w\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.180361 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7fg\" (UniqueName: \"kubernetes.io/projected/e93ac374-cf01-41ab-a628-5c2cb5de7437-kube-api-access-zc7fg\") pod \"machine-config-daemon-hc6sg\" (UID: \"e93ac374-cf01-41ab-a628-5c2cb5de7437\") " pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.180712 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf9fg\" (UniqueName: \"kubernetes.io/projected/a08746c9-6dd1-4414-a681-c8a254264429-kube-api-access-wf9fg\") pod \"node-resolver-fjjsq\" (UID: \"a08746c9-6dd1-4414-a681-c8a254264429\") " pod="openshift-dns/node-resolver-fjjsq" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.181979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4d4t\" (UniqueName: \"kubernetes.io/projected/ec0c328b-b145-450a-aace-06bb839a1a02-kube-api-access-m4d4t\") pod \"multus-additional-cni-plugins-zqjdg\" (UID: \"ec0c328b-b145-450a-aace-06bb839a1a02\") " pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.182518 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qhh\" (UniqueName: \"kubernetes.io/projected/7319e265-17de-4801-8ab7-7671dba7489d-kube-api-access-t7qhh\") pod \"multus-lhm4n\" (UID: \"7319e265-17de-4801-8ab7-7671dba7489d\") " pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.195376 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\
\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.208995 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.222440 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:14 crc kubenswrapper[4832]: E1002 18:21:14.222573 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.225452 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"nam
e\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.238191 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.249651 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.260007 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.263174 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.268556 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.276142 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lhm4n" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.283863 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fjjsq" Oct 02 18:21:14 crc kubenswrapper[4832]: W1002 18:21:14.287413 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode93ac374_cf01_41ab_a628_5c2cb5de7437.slice/crio-32f8c54921a3bfe7a21d17e265139c4a3071ebe4fb478d1c5abc0ac9c1dba392 WatchSource:0}: Error finding container 32f8c54921a3bfe7a21d17e265139c4a3071ebe4fb478d1c5abc0ac9c1dba392: Status 404 returned error can't find the container with id 32f8c54921a3bfe7a21d17e265139c4a3071ebe4fb478d1c5abc0ac9c1dba392 Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.289689 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.292725 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\"
,\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\
\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.316860 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: W1002 18:21:14.325359 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28e6c98b_e4b6_4027_8cf5_655985e80fac.slice/crio-ec348c0fa8c5534d4d1134e24543066d4d3a02f5504730734a0a0c14d24d4e3c WatchSource:0}: Error finding container ec348c0fa8c5534d4d1134e24543066d4d3a02f5504730734a0a0c14d24d4e3c: Status 404 returned error can't find the container with id ec348c0fa8c5534d4d1134e24543066d4d3a02f5504730734a0a0c14d24d4e3c Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.430979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" 
event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerStarted","Data":"f1075587275f5b27af3fd31f9483703e7d89317f1bd387201c30cbaa9491d689"} Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.433809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"32f8c54921a3bfe7a21d17e265139c4a3071ebe4fb478d1c5abc0ac9c1dba392"} Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.436295 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"ec348c0fa8c5534d4d1134e24543066d4d3a02f5504730734a0a0c14d24d4e3c"} Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.437209 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhm4n" event={"ID":"7319e265-17de-4801-8ab7-7671dba7489d","Type":"ContainerStarted","Data":"1037d5138816ae5de2001abec61f0462645e5b169a7232751661a3a34d587b1d"} Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.438347 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fjjsq" event={"ID":"a08746c9-6dd1-4414-a681-c8a254264429","Type":"ContainerStarted","Data":"2c50c95a2feb53db847bbf47e348236a14697d59ba4dd935c5076208b4902fc1"} Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.865236 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.871362 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.874156 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.882694 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.896505 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.911112 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.924847 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.940624 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.964322 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.980198 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:14 crc kubenswrapper[4832]: I1002 18:21:14.992985 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:14Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.007387 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.020527 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.036979 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.051896 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.065641 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.080940 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.093154 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.106658 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.121819 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.135833 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.157789 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.170110 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.192125 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.205823 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.219877 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.221958 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.222071 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:15 crc kubenswrapper[4832]: E1002 18:21:15.222206 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:15 crc kubenswrapper[4832]: E1002 18:21:15.222396 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.234007 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.247096 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.261734 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.274952 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.288349 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.301637 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.312136 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.330099 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.349139 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.363020 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.378859 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.396629 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.407915 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.432535 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.443201 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727"} Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.443287 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c"} Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.444791 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611" exitCode=0 Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.444875 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611"} Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.446395 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhm4n" event={"ID":"7319e265-17de-4801-8ab7-7671dba7489d","Type":"ContainerStarted","Data":"897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db"} Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.449165 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fjjsq" event={"ID":"a08746c9-6dd1-4414-a681-c8a254264429","Type":"ContainerStarted","Data":"7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc"} Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.449558 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.459289 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec0c328b-b145-450a-aace-06bb839a1a02" containerID="5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904" exitCode=0 Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.459556 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerDied","Data":"5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904"} Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.463908 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.480333 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed3
7613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.494665 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.508895 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.539100 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.606802 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.619771 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.633786 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.649436 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.670800 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.682815 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.700371 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.715739 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.731141 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.745930 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.758090 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.770680 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.786809 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc 
kubenswrapper[4832]: I1002 18:21:15.799001 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.812670 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.827768 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.841050 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:2
0:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.853197 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.865867 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.879671 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.889201 4832 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.892070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.892106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.892149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.892308 4832 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.900067 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"nam
e\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.905159 4832 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.905435 4832 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.906402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.906439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.906452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.906465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.906485 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:15Z","lastTransitionTime":"2025-10-02T18:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:15 crc kubenswrapper[4832]: E1002 18:21:15.924584 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.929285 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.929344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.929362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.929398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.929419 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:15Z","lastTransitionTime":"2025-10-02T18:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:15 crc kubenswrapper[4832]: E1002 18:21:15.946773 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.950694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.950749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.950765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.950789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.950803 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:15Z","lastTransitionTime":"2025-10-02T18:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:15 crc kubenswrapper[4832]: E1002 18:21:15.967104 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.974926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.974970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.974986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.975005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.975023 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:15Z","lastTransitionTime":"2025-10-02T18:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:15 crc kubenswrapper[4832]: E1002 18:21:15.991888 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.996136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.996175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.996189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.996208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:15 crc kubenswrapper[4832]: I1002 18:21:15.996221 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:15Z","lastTransitionTime":"2025-10-02T18:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.009155 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.009280 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.011255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.011340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.011353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.011369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.011380 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.113482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.113522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.113535 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.113551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.113562 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.209205 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qdd5f"] Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.210024 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.212122 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.212813 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.212901 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.212991 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.216647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.216719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.216745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.216776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.216798 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.222241 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.222607 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.225022 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.259912 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkub
e-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.270240 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.285003 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc 
kubenswrapper[4832]: I1002 18:21:16.297448 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.311663 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcon
t/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.319083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.319133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.319144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.319168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 
02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.319180 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.325171 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.336531 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.354794 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.378376 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1636af9-732e-45d1-bb4f-2525340a0ac0-host\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.378462 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c1636af9-732e-45d1-bb4f-2525340a0ac0-serviceca\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.378485 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwkn9\" (UniqueName: \"kubernetes.io/projected/c1636af9-732e-45d1-bb4f-2525340a0ac0-kube-api-access-xwkn9\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.393525 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.422212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.422318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.422343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.422376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.422443 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.432338 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.474161 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.474524 4832 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.475481 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.475562 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.475620 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.478465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerStarted","Data":"bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.479495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c1636af9-732e-45d1-bb4f-2525340a0ac0-serviceca\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.479600 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwkn9\" (UniqueName: \"kubernetes.io/projected/c1636af9-732e-45d1-bb4f-2525340a0ac0-kube-api-access-xwkn9\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.479675 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1636af9-732e-45d1-bb4f-2525340a0ac0-host\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.479814 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c1636af9-732e-45d1-bb4f-2525340a0ac0-host\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.481238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c1636af9-732e-45d1-bb4f-2525340a0ac0-serviceca\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.521745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwkn9\" (UniqueName: 
\"kubernetes.io/projected/c1636af9-732e-45d1-bb4f-2525340a0ac0-kube-api-access-xwkn9\") pod \"node-ca-qdd5f\" (UID: \"c1636af9-732e-45d1-bb4f-2525340a0ac0\") " pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.525005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.525030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.525038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.525051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.525059 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.538391 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.557949 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qdd5f" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.570391 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: W1002 18:21:16.571191 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1636af9_732e_45d1_bb4f_2525340a0ac0.slice/crio-7ce9141280f55b8824a683c535f6c042cc6949479752277d62ec83cab0319328 WatchSource:0}: Error finding container 7ce9141280f55b8824a683c535f6c042cc6949479752277d62ec83cab0319328: Status 404 returned error can't find the container with id 7ce9141280f55b8824a683c535f6c042cc6949479752277d62ec83cab0319328 Oct 02 18:21:16 crc 
kubenswrapper[4832]: I1002 18:21:16.614238 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.628072 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.628115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.628130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.628146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.628155 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.654370 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.693589 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.734766 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.735307 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.735349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.735357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.735372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.735382 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.775275 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.816936 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.838400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.838428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.838436 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.838450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.838460 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.858467 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z 
is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.889207 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.889359 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.889393 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.889422 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.889445 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889473 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:21:24.889442244 +0000 UTC m=+41.858885116 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889603 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889613 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889623 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889629 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889596 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889735 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:24.889713743 +0000 UTC m=+41.859156615 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889787 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:24.889769954 +0000 UTC m=+41.859212826 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889633 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889816 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889849 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:24.889843057 +0000 UTC m=+41.859285929 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889638 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:16 crc kubenswrapper[4832]: E1002 18:21:16.889902 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:24.889894098 +0000 UTC m=+41.859336970 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.892522 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.934845 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.940805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.940870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.940889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.940910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.940926 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:16Z","lastTransitionTime":"2025-10-02T18:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:16 crc kubenswrapper[4832]: I1002 18:21:16.974344 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entry
point\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:16Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.017276 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.043414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.043461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.043473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.043493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.043505 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.057691 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.097560 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.133700 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.146959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.147000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.147014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.147031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.147043 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.221745 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.221891 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:17 crc kubenswrapper[4832]: E1002 18:21:17.221992 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:17 crc kubenswrapper[4832]: E1002 18:21:17.222094 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.249387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.249429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.249440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.249459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.249471 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.352415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.352663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.352773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.352860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.352948 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.455374 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.455415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.455430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.455457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.455484 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.484439 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qdd5f" event={"ID":"c1636af9-732e-45d1-bb4f-2525340a0ac0","Type":"ContainerStarted","Data":"f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.484524 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qdd5f" event={"ID":"c1636af9-732e-45d1-bb4f-2525340a0ac0","Type":"ContainerStarted","Data":"7ce9141280f55b8824a683c535f6c042cc6949479752277d62ec83cab0319328"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.490371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.490437 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.493372 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec0c328b-b145-450a-aace-06bb839a1a02" containerID="bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83" exitCode=0 Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.493425 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerDied","Data":"bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.505582 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.526952 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.547286 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.566228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.566351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.566366 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.566415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.566432 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.571499 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z 
is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.587684 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.606687 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.634537 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.650950 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.670175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.670219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.671039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.671071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.670977 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.671084 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.686119 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.700573 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.713815 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.728671 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.743291 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.757526 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed3
7613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.774025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.774077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.774092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.774112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.774123 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.775348 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.813223 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.855511 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.877083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.877134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.877147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.877169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.877183 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.894783 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.939753 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.980101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.980146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.980159 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.980179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.980192 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:17Z","lastTransitionTime":"2025-10-02T18:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:17 crc kubenswrapper[4832]: I1002 18:21:17.984366 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:17Z 
is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.016409 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.060747 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.083817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.083861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.083872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.083890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.083911 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.094646 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.139088 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f55
24ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.178984 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.187865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.187909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.187922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.187943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.187960 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.213228 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.222403 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:21:18 crc kubenswrapper[4832]: E1002 18:21:18.222519 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
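[annotation] New pod sandboxes ("No sandbox for pod can be found") are blocked on the same condition the Ready=False records cite: the kubelet's network-ready check sees no CNI configuration file in /etc/kubernetes/cni/net.d/, so pod sync is skipped rather than started on a half-configured network. The check is essentially a watch on that directory; a rough equivalent, with the path taken from the log and the poll interval chosen arbitrarily:

    import os
    import time

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"

    def cni_config_present(conf_dir: str) -> bool:
        # The kubelet considers the network ready once a CNI config
        # (.conf/.conflist/.json) appears in its CNI conf directory.
        try:
            return any(
                name.endswith((".conf", ".conflist", ".json"))
                for name in os.listdir(conf_dir)
            )
        except FileNotFoundError:
            return False

    while not cni_config_present(CNI_CONF_DIR):
        print("NetworkReady=false: no CNI configuration file yet")
        time.sleep(5)
    print("CNI configuration present; NetworkReady should flip to true")

In this boot the file is expected to appear once the multus init containers finish copying their plugins, which the ContainerDied events further down show happening one plugin at a time.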
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.254922 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.290858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.290902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.290913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.290933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.290944 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.394387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.394447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.394463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.394486 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.394501 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.497784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.498708 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.498899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.499054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.499246 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.501073 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec0c328b-b145-450a-aace-06bb839a1a02" containerID="f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769" exitCode=0 Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.501241 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerDied","Data":"f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769"} Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.523308 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-conf
ig\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.552541 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.567032 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.589571 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.603671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.603705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.603715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.603732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.603744 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
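[annotation] The status patches in these records are worth recovering when debugging: each err field quotes the full JSON body of the patch the kubelet tried to apply (conditions, containerStatuses, podIP), but it arrives with one layer of backslash escaping per quoting step. A rough sketch that peels the layers until the payload parses; the slicing assumes the record layout seen in this export, and the number of escape layers depends on how the journal was captured:

    import codecs
    import json

    def extract_patch(record: str) -> dict:
        # Slice out the quoted payload between the fixed markers of
        # status_manager.go's "failed to patch status" message.
        start = record.index('failed to patch status ') + len('failed to patch status ')
        end = record.index(' for pod ', start)
        payload = record[start:end].strip().strip('\\"')
        # Peel escaping layers (journal quoting on top of Go quoting)
        # until the payload is plain JSON.
        for _ in range(4):
            try:
                return json.loads(payload)
            except json.JSONDecodeError:
                payload = codecs.decode(payload, "unicode_escape")
        raise ValueError("payload did not decode to JSON")

Applied to the multus-additional-cni-plugins record below, this yields an initContainerStatuses list showing egress-router-binary-copy, cni-plugins, and bond-cni-plugin already terminated with exit code 0 and the whereabouts containers still PodInitializing; the pod is progressing normally, and only the status report is failing.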
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.604518 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z"
Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.630114 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.645791 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.659591 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.676251 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.689628 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.702876 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.705399 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.705420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.705429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.705442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.705451 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.737436 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.778972 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkub
e-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574
53265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.808218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.808289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.808306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.808328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.808340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.813633 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:18Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.910429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.910483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.910495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.910518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:18 crc kubenswrapper[4832]: I1002 18:21:18.910531 4832 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:18Z","lastTransitionTime":"2025-10-02T18:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.012916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.012955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.012965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.012981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.012991 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.115330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.115378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.115390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.115408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.115420 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.218759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.218821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.218839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.218867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.218885 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.222175 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.222239 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:19 crc kubenswrapper[4832]: E1002 18:21:19.222407 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:19 crc kubenswrapper[4832]: E1002 18:21:19.222662 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.327082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.327239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.327316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.327348 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.327367 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.430069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.430162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.430190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.430225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.430248 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.509581 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec0c328b-b145-450a-aace-06bb839a1a02" containerID="9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8" exitCode=0 Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.509661 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerDied","Data":"9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.518467 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.533532 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.533585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.533603 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.533628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.533648 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.534696 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.556488 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.574386 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.589986 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.603122 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.623689 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.636194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.636234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.636248 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.636283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.636298 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.649146 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z 
is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.660855 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.681377 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.695475 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.709243 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.722718 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.738542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.738595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.738606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.738619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.738628 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.741594 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:
21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.757160 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:19Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.841909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.841970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.841989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.842017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.842037 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.944969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.945023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.945042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.945070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:19 crc kubenswrapper[4832]: I1002 18:21:19.945089 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:19Z","lastTransitionTime":"2025-10-02T18:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.048253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.048400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.048430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.048461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.048484 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.152543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.152607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.152627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.152651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.152669 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.222346 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:20 crc kubenswrapper[4832]: E1002 18:21:20.222572 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.255876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.255915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.255925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.255943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.255957 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.358811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.358856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.358867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.358885 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.358898 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.461522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.461593 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.461619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.461655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.461676 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.528236 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerStarted","Data":"1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.565500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.565539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.565551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.565570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.565581 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.668482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.668561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.668587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.668617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.668641 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.772247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.772355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.772375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.772401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.772419 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.875332 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.875395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.875419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.875454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.875478 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.978601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.978910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.979064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.979235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:20 crc kubenswrapper[4832]: I1002 18:21:20.979480 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:20Z","lastTransitionTime":"2025-10-02T18:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.083175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.083600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.083610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.083626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.083637 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.186783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.186828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.186844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.186867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.186882 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.222296 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.222475 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:21 crc kubenswrapper[4832]: E1002 18:21:21.222531 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:21 crc kubenswrapper[4832]: E1002 18:21:21.222708 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.289980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.290156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.290218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.290316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.290406 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.392925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.393203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.393415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.393567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.393717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.496409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.496483 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.496506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.496538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.496560 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.538431 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.538822 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.538957 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.559025 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.575937 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.598211 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.600086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.600203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.600331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.600436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.600691 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.612018 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.612424 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.613385 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.630245 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.652720 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.669924 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.686054 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.703705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.703860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.703923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.703994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.704055 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.712799 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.730497 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.747007 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.765506 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.780743 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.795648 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.807393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.807440 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.807449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.807463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.807473 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.808673 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.829248 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.844822 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.864166 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.881603 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.905078 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.909924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.909973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.909986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.910003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.910018 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:21Z","lastTransitionTime":"2025-10-02T18:21:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.923592 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.939043 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f55
24ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.955671 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.968754 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:21 crc kubenswrapper[4832]: I1002 18:21:21.992888 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:21Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.009303 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.012953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.012986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.012997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.013014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.013024 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.023163 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.036566 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.115461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.115514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.115525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.115542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.115553 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.219411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.219451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.219461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.219476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.219489 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.222824 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:22 crc kubenswrapper[4832]: E1002 18:21:22.223015 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.322130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.322178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.322191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.322210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.322223 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.425519 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.425589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.425607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.425632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.425649 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.528467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.528543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.528564 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.528589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.528607 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.546067 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec0c328b-b145-450a-aace-06bb839a1a02" containerID="1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5" exitCode=0 Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.546170 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerDied","Data":"1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.546250 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.577892 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad37
4090b421bab3e21acbf2d3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.592798 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.610000 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.628534 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.633779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.633813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.633825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.633841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.633850 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.645213 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.660918 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.675818 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.689647 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.709633 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.720134 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.736319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.736363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.736372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.736390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.736399 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.739448 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.751429 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.767908 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.781026 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:22Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.840501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.840892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.840905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.840924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.840936 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.943677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.943717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.943744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.943759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:22 crc kubenswrapper[4832]: I1002 18:21:22.943769 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:22Z","lastTransitionTime":"2025-10-02T18:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.046946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.047040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.047059 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.047087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.047104 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.149799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.149837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.149848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.149868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.149879 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.222609 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.222626 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:23 crc kubenswrapper[4832]: E1002 18:21:23.222798 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:23 crc kubenswrapper[4832]: E1002 18:21:23.222876 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.252503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.252604 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.252629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.252659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.252682 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.355148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.355200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.355211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.355225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.355234 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.458590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.458642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.458655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.458674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.458686 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.555565 4832 generic.go:334] "Generic (PLEG): container finished" podID="ec0c328b-b145-450a-aace-06bb839a1a02" containerID="1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a" exitCode=0 Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.555677 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerDied","Data":"1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.555720 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.561007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.561048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.561062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.561079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.561093 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.574060 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.591756 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.623447 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.638422 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.654389 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.663231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.663290 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.663302 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.663318 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.663329 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.672951 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.688772 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.703652 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.715223 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.731342 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.742618 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.753821 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.765982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.766021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.766030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.766045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.766055 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.767511 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.781542 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:23Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.868733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.868797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.868810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.868833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.868856 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.971255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.971313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.971324 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.971340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:23 crc kubenswrapper[4832]: I1002 18:21:23.971349 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:23Z","lastTransitionTime":"2025-10-02T18:21:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.074021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.074187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.074205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.074229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.074246 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.176853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.176911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.176920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.176936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.176945 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.222331 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.222476 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.279886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.279956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.279969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.279991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.280030 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.383233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.383291 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.383303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.383321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.383334 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.485766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.485813 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.485826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.485847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.485862 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.560760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" event={"ID":"ec0c328b-b145-450a-aace-06bb839a1a02","Type":"ContainerStarted","Data":"ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.588915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.588957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.588973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.588991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.589004 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.609293 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.621872 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.636860 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.651373 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.664722 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.679805 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.691452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.691506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.691522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.691541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.691554 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.692614 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.707884 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.720660 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.731815 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.744158 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.759015 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.772039 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.785200 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:24Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.794056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.794113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.794125 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.794143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.794155 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.896637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.896691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.896703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.896721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.896739 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.980080 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.980172 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.980197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.980218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980235 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:21:40.980214967 +0000 UTC m=+57.949657839 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.980275 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980336 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980345 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980375 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980390 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980401 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980390 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:40.980378151 +0000 UTC m=+57.949821013 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980432 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:40.980426213 +0000 UTC m=+57.949869085 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980442 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:40.980437334 +0000 UTC m=+57.949880206 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980470 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980539 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980556 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:24 crc kubenswrapper[4832]: E1002 18:21:24.980649 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 18:21:40.980622939 +0000 UTC m=+57.950065991 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.999349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.999404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.999422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.999444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:24 crc kubenswrapper[4832]: I1002 18:21:24.999455 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:24Z","lastTransitionTime":"2025-10-02T18:21:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.102948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.102997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.103015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.103043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.103071 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.205450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.205496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.205508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.205526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.205538 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.223115 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.223229 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:25 crc kubenswrapper[4832]: E1002 18:21:25.223611 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.223950 4832 scope.go:117] "RemoveContainer" containerID="08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263" Oct 02 18:21:25 crc kubenswrapper[4832]: E1002 18:21:25.224121 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.249493 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.265748 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.280510 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.294152 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.306403 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.308497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.308537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.308549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.308572 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.308588 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.325852 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.339569 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.355742 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.370522 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.386608 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.401702 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z"
Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.410998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.411028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.411037 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.411052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.411064 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.415201 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.430496 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.444321 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.512931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.512965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.512974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.512989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.512999 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.565368 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/0.log" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.567685 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf" exitCode=1 Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.567722 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.568352 4832 scope.go:117] "RemoveContainer" containerID="dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.587807 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.606395 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.615177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.615210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.615220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.615237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.615248 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.624425 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.641661 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.656594 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.673743 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.708542 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.717622 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.717694 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.717707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.717725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.717738 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.723962 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.735738 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.755221 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.770923 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.787116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.804408 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.820120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.820165 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.820177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.820194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.820205 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.820741 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:25Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.923110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.923169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.923185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.923202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:25 crc kubenswrapper[4832]: I1002 18:21:25.923214 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:25Z","lastTransitionTime":"2025-10-02T18:21:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.026926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.026990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.027008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.027035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.027052 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.130753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.130845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.130873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.130909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.130934 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
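
[annotation] The repeating NodeNotReady condition is independent of the webhook failure: the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, and the kubelet relays that verbatim. A rough sketch of what such a presence check amounts to — simplified, since the real CRI-O/libcni logic also parses and validates the files rather than just listing them:

```go
// Rough sketch of a CNI-config presence check, simplified from what a
// container runtime does before reporting NetworkReady=true.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		log.Fatal(err)
	}
	if !ok {
		// Mirrors the condition in the log: no config file yet.
		fmt.Println("no CNI configuration file found; network plugin not ready")
		return
	}
	fmt.Println("CNI configuration present")
}
```
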
Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.150570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.150621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.150635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.150652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.150664 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: E1002 18:21:26.168437 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 
2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.173549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.173606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.173619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.173639 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.173651 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: E1002 18:21:26.192235 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 
2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.199414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.199463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.199474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.199495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.199508 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: E1002 18:21:26.218940 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 
2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.222203 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:26 crc kubenswrapper[4832]: E1002 18:21:26.222410 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.222930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.222972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.222983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.223000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.223011 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: E1002 18:21:26.236707 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 
2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.241664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.241703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.241714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.241734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.241745 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: E1002 18:21:26.263278 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 
2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: E1002 18:21:26.263494 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.265712 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.265891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.265984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.266067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.266149 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.369900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.369968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.369991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.370021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.370044 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.397703 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg"] Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.398239 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.398912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f3b59d5f-e3e4-403f-a165-f83220d4a0de-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.398960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glkzl\" (UniqueName: \"kubernetes.io/projected/f3b59d5f-e3e4-403f-a165-f83220d4a0de-kube-api-access-glkzl\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.398996 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f3b59d5f-e3e4-403f-a165-f83220d4a0de-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.399220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f3b59d5f-e3e4-403f-a165-f83220d4a0de-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.400356 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.400895 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.416989 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.436014 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.450640 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.465947 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.472909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.472969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.472986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.473009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.473025 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.486455 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.499900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f3b59d5f-e3e4-403f-a165-f83220d4a0de-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.499986 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f3b59d5f-e3e4-403f-a165-f83220d4a0de-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.500129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glkzl\" (UniqueName: \"kubernetes.io/projected/f3b59d5f-e3e4-403f-a165-f83220d4a0de-kube-api-access-glkzl\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.500167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f3b59d5f-e3e4-403f-a165-f83220d4a0de-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.500782 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f3b59d5f-e3e4-403f-a165-f83220d4a0de-env-overrides\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.501630 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f3b59d5f-e3e4-403f-a165-f83220d4a0de-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.502022 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.507980 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f3b59d5f-e3e4-403f-a165-f83220d4a0de-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.515892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glkzl\" (UniqueName: \"kubernetes.io/projected/f3b59d5f-e3e4-403f-a165-f83220d4a0de-kube-api-access-glkzl\") pod \"ovnkube-control-plane-749d76644c-k9hrg\" (UID: \"f3b59d5f-e3e4-403f-a165-f83220d4a0de\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.523307 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.535496 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.544746 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.554460 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.564503 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.572134 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.574711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.575511 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.575633 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.575654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 
02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.575666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.575679 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.575691 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.576716 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.577763 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/0.log" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.581764 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.590000 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.611652 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.623780 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.639672 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb
2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.654230 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.668988 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.678701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.678754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.678772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.678800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.678818 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.684683 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.702503 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.712612 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.719685 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.734101 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.756581 4832 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.768528 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.781852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.781898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.781924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.781944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.781959 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.784459 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.804822 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.822167 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.849407 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.861084 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.878569 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:26Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.890750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.890789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.890800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.890819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.890830 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.993634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.993676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.993685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.993704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:26 crc kubenswrapper[4832]: I1002 18:21:26.993714 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:26Z","lastTransitionTime":"2025-10-02T18:21:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.096781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.096826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.096839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.096857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.096867 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.198636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.198664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.198672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.198686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.198695 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.222618 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:27 crc kubenswrapper[4832]: E1002 18:21:27.222757 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.222987 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:27 crc kubenswrapper[4832]: E1002 18:21:27.223179 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.301799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.301848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.301865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.301888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.301906 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.404458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.404517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.404529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.404550 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.404562 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.507663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.507722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.507734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.507753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.507766 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.587985 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" event={"ID":"f3b59d5f-e3e4-403f-a165-f83220d4a0de","Type":"ContainerStarted","Data":"880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.588252 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" event={"ID":"f3b59d5f-e3e4-403f-a165-f83220d4a0de","Type":"ContainerStarted","Data":"a2fe2d4d1ccc6e93ec1422efc1a03f598e2840e2821f933305fe0b5b5020dea2"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.588282 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.610798 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.611218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.611280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.611297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.611316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.611328 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.636601 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.651407 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.679525 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.697759 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.712408 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.713657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.713698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.713708 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.713726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.713736 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.730440 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.748534 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.765677 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.785899 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.803911 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.815949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.815996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.816008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.816024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.816376 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.823497 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.840928 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.856254 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0
bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.869593 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.899200 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-m27c2"] Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.899785 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:27 crc kubenswrapper[4832]: E1002 18:21:27.899870 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.913749 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.913819 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6lr\" (UniqueName: \"kubernetes.io/projected/8adcf2d1-6a80-40e8-a94b-627c2b18443f-kube-api-access-cd6lr\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.915052 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.918824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.918877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.918892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.918911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.919241 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:27Z","lastTransitionTime":"2025-10-02T18:21:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.927965 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc 
kubenswrapper[4832]: I1002 18:21:27.938892 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 
2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.953212 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.975373 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:27 crc kubenswrapper[4832]: I1002 18:21:27.995866 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:27Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.008333 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.015394 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6lr\" (UniqueName: \"kubernetes.io/projected/8adcf2d1-6a80-40e8-a94b-627c2b18443f-kube-api-access-cd6lr\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.015452 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:28 crc kubenswrapper[4832]: E1002 
18:21:28.015711 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:28 crc kubenswrapper[4832]: E1002 18:21:28.015826 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs podName:8adcf2d1-6a80-40e8-a94b-627c2b18443f nodeName:}" failed. No retries permitted until 2025-10-02 18:21:28.515801311 +0000 UTC m=+45.485244203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs") pod "network-metrics-daemon-m27c2" (UID: "8adcf2d1-6a80-40e8-a94b-627c2b18443f") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.021104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.021142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.021156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.021175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.021189 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.023386 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.032739 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6lr\" (UniqueName: \"kubernetes.io/projected/8adcf2d1-6a80-40e8-a94b-627c2b18443f-kube-api-access-cd6lr\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.038529 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.050187 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.070286 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.081794 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\
\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.094953 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.110885 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.123861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.123914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.123925 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.123946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.123959 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.127967 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.142850 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.221875 4832 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:28 crc kubenswrapper[4832]: E1002 18:21:28.222018 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.226614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.226648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.226659 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.226674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.226685 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.329157 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.329203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.329215 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.329232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
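[Editor's note: the kubelet keeps flipping the node to Ready=False because the container runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/; on this cluster that directory is only populated once the multus/OVN pods come up. The short Go program below is a minimal sketch of the kind of check behind the NetworkPluginNotReady message, assuming the directory path from the log; the helper name cniConfigPresent is hypothetical, and this is not the actual CRI-O/ocicni code.]

// Minimal sketch: scan the CNI conf dir the way a runtime would before
// declaring NetworkReady=true. Path taken from the log above; the helper
// name is made up for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		// CNI conf files conventionally end in .conf, .conflist, or .json.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady (no CNI configuration file)")
		return
	}
	fmt.Println("NetworkReady=true")
}

Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.329242 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.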
Has your network provider started?"}
Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.431328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.431389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.431405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.431427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.431442 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.521485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:21:28 crc kubenswrapper[4832]: E1002 18:21:28.521783 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
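[Editor's note: the nestedpendingoperations entry that follows shows the kubelet's volume-mount retry backoff. After each failed attempt the wait ("durationBeforeRetry") roughly doubles; the 1s seen here is consistent with an initial ~500ms delay doubled once, and long-failing mounts settle at the familiar 2m2s cap. The Go sketch below reconstructs that schedule; the constants are illustrative, not lifted from kubelet source.]

// Hypothetical reconstruction of the exponential backoff visible in
// "durationBeforeRetry": double the wait per failure, up to a cap.
package main

import (
	"fmt"
	"time"
)

func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
	out := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	// Prints [500ms 1s 2s 4s 8s 16s 32s 1m4s 2m2s 2m2s]; the second
	// attempt (1s) matches the retry recorded in this log.
	fmt.Println(backoffSchedule(500*time.Millisecond, 122*time.Second, 10))
}

Oct 02 18:21:28 crc kubenswrapper[4832]: E1002 18:21:28.521914 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs podName:8adcf2d1-6a80-40e8-a94b-627c2b18443f nodeName:}" failed. No retries permitted until 2025-10-02 18:21:29.521888844 +0000 UTC m=+46.491331716 (durationBeforeRetry 1s).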
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs") pod "network-metrics-daemon-m27c2" (UID: "8adcf2d1-6a80-40e8-a94b-627c2b18443f") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.534105 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.534175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.534189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.534212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.534233 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.594545 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" event={"ID":"f3b59d5f-e3e4-403f-a165-f83220d4a0de","Type":"ContainerStarted","Data":"58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.596679 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/1.log" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.597345 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/0.log" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.600015 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129" exitCode=1 Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.600055 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.600089 4832 scope.go:117] "RemoveContainer" containerID="dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.600925 4832 scope.go:117] "RemoveContainer" containerID="74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129" Oct 02 18:21:28 crc kubenswrapper[4832]: E1002 18:21:28.601097 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac"
Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.615513 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z"
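[Editor's note: every "Failed to update status for pod" entry in this section dies on the same admission webhook, pod.network-node-identity.openshift.io, served at https://127.0.0.1:9743 by the network-node-identity-vrzqb pod whose own status patch just failed above (its webhook container mounts the cert at /etc/webhook-cert/). The serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2025-10-02T18:21:28Z, so Go's crypto/x509 rejects the TLS handshake on every call. A minimal sketch of that validity-window check follows, assuming a placeholder PEM path; it reproduces the shape of the error, not the webhook's actual code.]

// Sketch of the check behind "x509: certificate has expired or is not yet
// valid": a chain is rejected when the current time falls outside the
// certificate's [NotBefore, NotAfter] window. The file path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("webhook-cert.pem") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// Mirrors the log: "current time <now> is after <NotAfter>".
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}

Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.637249 4832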
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.637326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.637337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.637358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.637370 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.641576 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b
81a9d5efcf547e7fe2935129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.656089 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.672442 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.687892 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.701732 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.714492 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\
\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.730701 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.740408 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.740459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.740469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.740487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.740502 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.744844 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.759373 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.776670 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.790497 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.803004 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.816899 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.829839 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.843481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.843536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.843549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.843567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.843546 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.843581 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.857501 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.880018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793
c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"message\\\":\\\" 6285 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:28.293613 6285 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1002 18:21:28.293620 6285 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-m27c2]\\\\nI1002 18:21:28.293629 6285 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1002 18:21:28.293647 6285 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-m27c2 before timer (time: 2025-10-02 18:21:29.575133677 +0000 UTC m=+1.927420082): skip\\\\nI1002 18:21:28.293662 6285 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 42.961µs)\\\\nI1002 18:21:28.310025 6285 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 18:21:28.310117 6285 factory.go:656] Stopping watch factory\\\\nI1002 18:21:28.310138 6285 ovnkube.go:599] Stopped ovnkube\\\\nI1002 18:21:28.310195 6285 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 18:21:28.310224 6285 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 18:21:28.310388 6285 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.893035 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.911151 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.926317 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.942777 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.945558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.945585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.945594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.945609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.945617 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:28Z","lastTransitionTime":"2025-10-02T18:21:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.959178 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.975032 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:28 crc kubenswrapper[4832]: I1002 18:21:28.989621 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:28Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.006199 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:29Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.023161 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:29Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.036742 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:29Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.048058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.048097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.048108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.048122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.048132 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:29Z","lastTransitionTime":"2025-10-02T18:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.050444 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:29Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.066325 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:29Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.080461 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:29Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.098800 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:29Z is after 2025-08-24T17:21:41Z" Oct 02 
[node-status block repeated at 18:21:29.151: the same four "Recording event message for node" records (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) plus the "Node became not ready" record as at 18:21:29.048 above, with lastHeartbeatTime/lastTransitionTime advancing with the log time]
Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.222631 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:21:29 crc kubenswrapper[4832]: E1002 18:21:29.222816 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.223023 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:21:29 crc kubenswrapper[4832]: E1002 18:21:29.223309 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node-status block repeated at 18:21:29.255, 18:21:29.358, 18:21:29.462]
Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.534600 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:21:29 crc kubenswrapper[4832]: E1002 18:21:29.534836 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Oct 02 18:21:29 crc kubenswrapper[4832]: E1002 18:21:29.534926 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs podName:8adcf2d1-6a80-40e8-a94b-627c2b18443f nodeName:}" failed. No retries permitted until 2025-10-02 18:21:31.534899339 +0000 UTC m=+48.504342251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs") pod "network-metrics-daemon-m27c2" (UID: "8adcf2d1-6a80-40e8-a94b-627c2b18443f") : object "openshift-multus"/"metrics-daemon-secret" not registered
[node-status block repeated at 18:21:29.567]
Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.605248 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/1.log"
[node-status block repeated at 18:21:29.670, 18:21:29.773, 18:21:29.877, 18:21:29.979, 18:21:30.082, 18:21:30.186]
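The nestedpendingoperations record above also shows the kubelet's per-volume retry backoff: this failed mount is deferred 2s (durationBeforeRetry 2s), and the same operation at 18:21:31.559 further below is deferred 4s, so the delay doubles per consecutive failure. A small sketch of that doubling schedule; the base is taken from the observed 2s, while the cap and attempt count are illustrative assumptions rather than kubelet constants:

    # backoff.py: doubling retry schedule implied by durationBeforeRetry 2s -> 4s (sketch)
    def backoff_schedule(base=2.0, cap=32.0, attempts=6):
        delay, elapsed = base, 0.0
        for attempt in range(1, attempts + 1):
            elapsed += delay
            yield attempt, delay, elapsed
            delay = min(delay * 2, cap)  # cap is an assumed illustration, not a kubelet value

    for attempt, delay, elapsed in backoff_schedule():
        print(f"retry {attempt}: wait {delay:.0f}s (cumulative {elapsed:.0f}s)")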
Has your network provider started?"} Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.773496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.773810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.773890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.774007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.774078 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:29Z","lastTransitionTime":"2025-10-02T18:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.877792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.877845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.877860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.877880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.877893 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:29Z","lastTransitionTime":"2025-10-02T18:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.979878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.979909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.979918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.979931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:29 crc kubenswrapper[4832]: I1002 18:21:29.979942 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:29Z","lastTransitionTime":"2025-10-02T18:21:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.082827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.082889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.082909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.082936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.082953 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.186098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.186141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.186154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.186171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.186182 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.222971 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:30 crc kubenswrapper[4832]: E1002 18:21:30.223131 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.223532 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:30 crc kubenswrapper[4832]: E1002 18:21:30.223744 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.289797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.290098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.290186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.290308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.290405 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.393559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.393932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.394091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.394219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.394358 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.496986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.497032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.497044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.497061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.497073 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.600058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.600100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.600112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.600130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.600143 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.702575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.703492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.703576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.703615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.703633 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.806977 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.807069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.807085 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.807130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.807148 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.910752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.910817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.910829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.910848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:30 crc kubenswrapper[4832]: I1002 18:21:30.910862 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:30Z","lastTransitionTime":"2025-10-02T18:21:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.013868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.013912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.013923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.013939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.013952 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.118189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.118253 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.118326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.118361 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.118384 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.220963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.221019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.221033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.221055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.221073 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.221789 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:31 crc kubenswrapper[4832]: E1002 18:21:31.222013 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.222103 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:31 crc kubenswrapper[4832]: E1002 18:21:31.222387 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.323828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.323879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.323890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.323911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.323925 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.427419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.427470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.427480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.427498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.427510 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.530476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.530548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.530600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.530623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.530648 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.558864 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:31 crc kubenswrapper[4832]: E1002 18:21:31.559157 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:31 crc kubenswrapper[4832]: E1002 18:21:31.559288 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs podName:8adcf2d1-6a80-40e8-a94b-627c2b18443f nodeName:}" failed. No retries permitted until 2025-10-02 18:21:35.559243424 +0000 UTC m=+52.528686476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs") pod "network-metrics-daemon-m27c2" (UID: "8adcf2d1-6a80-40e8-a94b-627c2b18443f") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.633815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.633865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.633883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.633904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.633920 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.736841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.736888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.736896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.736915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.736926 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
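Both the NodeNotReady condition and the "Error syncing pod" records keep pointing at one gate: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, which the network provider (ovn-kubernetes here) has not yet written. A minimal sketch of the check the kubelet is effectively making, to be run on the node itself; the path comes from the log, while treating .conf/.conflist/.json as the config extensions is an assumption about the CNI loader:

    # cni_check.py: does the CNI conf dir referenced in the log contain any config yet? (sketch)
    import pathlib

    CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")
    confs = sorted(p.name for p in CNI_DIR.iterdir()
                   if p.suffix in (".conf", ".conflist", ".json")) if CNI_DIR.is_dir() else []
    print(f"{CNI_DIR}: {confs if confs else 'no CNI configuration yet'}")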
Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.839728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.839777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.839787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.839806 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.839818 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.942621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.942663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.942673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.942722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:31 crc kubenswrapper[4832]: I1002 18:21:31.942737 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:31Z","lastTransitionTime":"2025-10-02T18:21:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.045978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.046067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.046089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.046122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.046147 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.149384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.149442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.149456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.149474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.149487 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.222781 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:32 crc kubenswrapper[4832]: E1002 18:21:32.222936 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.222783 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:32 crc kubenswrapper[4832]: E1002 18:21:32.223064 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.252762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.253029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.253180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.253327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.253434 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.356846 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.356911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.356929 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.356953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.356968 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.459373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.459427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.459439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.459457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.459470 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.563067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.563111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.563121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.563139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.563150 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.666522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.666575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.666595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.666612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.666621 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.769836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.769880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.769890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.769905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.769916 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.873028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.873088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.873101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.873120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.873138 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.976076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.976144 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.976163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.976189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:32 crc kubenswrapper[4832]: I1002 18:21:32.976209 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:32Z","lastTransitionTime":"2025-10-02T18:21:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.078957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.079039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.079066 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.079099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.079123 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.182900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.182962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.182976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.183003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.183014 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.222634 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:33 crc kubenswrapper[4832]: E1002 18:21:33.222824 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.223293 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:33 crc kubenswrapper[4832]: E1002 18:21:33.223525 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.286755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.287598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.287686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.287763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.287847 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.390821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.390875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.390887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.390907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.390920 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.493886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.493942 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.493956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.493976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.493990 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.597777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.598123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.598212 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.598339 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.598429 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.700585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.700883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.701021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.701126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.701207 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.804913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.804986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.805005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.805035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.805058 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.907829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.907890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.907906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.907926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:33 crc kubenswrapper[4832]: I1002 18:21:33.907938 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:33Z","lastTransitionTime":"2025-10-02T18:21:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.011083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.011479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.011498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.011526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.011557 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.114890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.114945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.114957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.114976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.114988 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.217981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.218036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.218050 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.218071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.218083 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.222401 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.222479 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:34 crc kubenswrapper[4832]: E1002 18:21:34.222572 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:34 crc kubenswrapper[4832]: E1002 18:21:34.222712 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.320496 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.320543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.320554 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.320570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.320581 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.423186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.423260 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.423327 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.423358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.423381 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.526373 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.526873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.526974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.527103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.527209 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.630403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.630461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.630474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.630495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.630507 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.733824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.733878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.733889 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.733909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.733922 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.836464 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.837135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.837220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.837328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.837408 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.939613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.939666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.939683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.939704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:34 crc kubenswrapper[4832]: I1002 18:21:34.939717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:34Z","lastTransitionTime":"2025-10-02T18:21:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.042372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.042684 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.042798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.042876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.042963 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.146075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.146146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.146164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.146187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.146200 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.222248 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.222329 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:35 crc kubenswrapper[4832]: E1002 18:21:35.222794 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:35 crc kubenswrapper[4832]: E1002 18:21:35.222841 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.237503 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.248695 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.248751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.248762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.248786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.248806 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.266122 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b
81a9d5efcf547e7fe2935129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"message\\\":\\\" 6285 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:28.293613 6285 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1002 18:21:28.293620 6285 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-m27c2]\\\\nI1002 18:21:28.293629 6285 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1002 18:21:28.293647 6285 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-m27c2 before timer (time: 2025-10-02 18:21:29.575133677 +0000 UTC m=+1.927420082): skip\\\\nI1002 18:21:28.293662 6285 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 42.961µs)\\\\nI1002 18:21:28.310025 6285 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 18:21:28.310117 6285 factory.go:656] Stopping watch factory\\\\nI1002 18:21:28.310138 6285 ovnkube.go:599] Stopped ovnkube\\\\nI1002 18:21:28.310195 6285 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 18:21:28.310224 6285 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 18:21:28.310388 6285 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.278894 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.295601 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.311084 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.327132 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.346400 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.351350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.351404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.351416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.351449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.351466 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.360501 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.375847 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.393884 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.416482 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.431932 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.444440 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.454780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.454834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.454848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.454870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.454885 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.455323 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.466853 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.478443 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:35Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.557953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.558051 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.558065 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.558082 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.558092 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.609638 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:35 crc kubenswrapper[4832]: E1002 18:21:35.609882 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:35 crc kubenswrapper[4832]: E1002 18:21:35.610230 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs podName:8adcf2d1-6a80-40e8-a94b-627c2b18443f nodeName:}" failed. No retries permitted until 2025-10-02 18:21:43.610208948 +0000 UTC m=+60.579651820 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs") pod "network-metrics-daemon-m27c2" (UID: "8adcf2d1-6a80-40e8-a94b-627c2b18443f") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.660771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.660826 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.660844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.660867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.660886 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.764088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.764136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.764145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.764161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.764171 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.866533 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.866596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.866606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.866625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.866635 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.970049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.970114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.970131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.970159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:35 crc kubenswrapper[4832]: I1002 18:21:35.970180 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:35Z","lastTransitionTime":"2025-10-02T18:21:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.073067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.073126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.073138 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.073155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.073167 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.176248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.176333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.176350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.176375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.176391 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.222510 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.222595 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:36 crc kubenswrapper[4832]: E1002 18:21:36.222656 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:36 crc kubenswrapper[4832]: E1002 18:21:36.222765 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
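
Alongside the webhook failures, the kubelet keeps republishing Ready=False because the runtime reports NetworkReady=false: no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. The condition behind that message amounts to the presence of a network configuration in that directory; the stand-in sketch below (assuming the usual *.conf/*.conflist/*.json candidates that libcni-style loaders scan for — this is not the actual CRI-O code) makes it easy to verify by hand:

    // Minimal stand-in (assumed libcni file conventions) for the check
    // behind "no CNI configuration file in /etc/kubernetes/cni/net.d/":
    // list candidate CNI config files so an operator can see whether the
    // network provider has written its configuration yet.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        var found []string
        for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(dir, pattern))
            if err != nil {
                fmt.Fprintln(os.Stderr, "glob:", err)
                os.Exit(1)
            }
            found = append(found, matches...)
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration files found; node stays NotReady")
            return
        }
        for _, f := range found {
            fmt.Println("CNI config:", f)
        }
    }

An empty listing here is consistent with the log: the OVN-Kubernetes components that would write the configuration are themselves blocked, so the node remains NotReady until the network provider comes up.
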
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.280606 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.280683 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.280697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.280720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.280737 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.384559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.385033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.385135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.385208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.385296 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.440083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.440147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.440166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.440191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.440213 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: E1002 18:21:36.460989 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:36Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.466838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.466890 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.466908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.466931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.466953 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: E1002 18:21:36.483685 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:36Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.490596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.490646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.490664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.490687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.490705 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: E1002 18:21:36.508418 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:36Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.514780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.514918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.514940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.514963 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.515020 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: E1002 18:21:36.535954 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:36Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.547409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.547463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.547614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.547693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.547713 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:36 crc kubenswrapper[4832]: E1002 18:21:36.563119 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:36Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:36 crc kubenswrapper[4832]: E1002 18:21:36.563405 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.565580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.565631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.565641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.565657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.565667 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.668696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.668759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.668775 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.668797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.668815 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.771310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.771354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.771364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.771380 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
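
The "Error updating node status, will retry" entries above all fail for the reason spelled out in their own error text: the serving certificate of the "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-02, so every status patch is rejected before it reaches the Node object. A minimal diagnostic sketch in Python (the script name is hypothetical, and it assumes the cryptography package is available on the node) that fetches the webhook's certificate and prints its validity window:

    # check_webhook_cert.py -- diagnostic sketch, run on the node itself.
    import ssl
    from cryptography import x509  # assumption: python3-cryptography is installed

    # Fetch the serving certificate WITHOUT verifying it; verification is
    # exactly the step that fails in the kubelet errors above.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)
    # Per the log, notAfter should come out as 2025-08-24 17:21:41, well
    # before the node's current time of 2025-10-02T18:21:36Z.
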
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.771389 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.874076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.874142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.874158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.874181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.874199 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.977356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.977405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.977416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.977432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:36 crc kubenswrapper[4832]: I1002 18:21:36.977446 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:36Z","lastTransitionTime":"2025-10-02T18:21:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.081113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.081169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.081188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.081211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
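
The kubelet re-records the same four node events roughly every 100 ms (.668, .771, .874, .977 ...) because the Ready condition keeps failing for the reason quoted in each message: no CNI configuration file in /etc/kubernetes/cni/net.d/. That directory is populated by the cluster network provider (here OVN-Kubernetes; its ovnkube-control-plane pod appears further down) once it comes up. A small sketch along the same lines (hypothetical script name; the extensions follow libcni's usual .conf/.conflist/.json set) to check whether the directory has been populated yet:

    # check_cni_conf.py -- sketch: has the network provider written a config yet?
    import pathlib

    # Directory named in the repeated kubelet message.
    conf_dir = pathlib.Path("/etc/kubernetes/cni/net.d")

    configs = []
    if conf_dir.is_dir():
        # libcni loads .conf, .conflist and .json files from this directory.
        configs = sorted(p for p in conf_dir.iterdir()
                         if p.suffix in {".conf", ".conflist", ".json"})

    if configs:
        for p in configs:
            print("CNI config present:", p)
    else:
        print("no CNI config yet -- kubelet keeps reporting NetworkPluginNotReady")
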
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.081227 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.184835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.184897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.184914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.184937 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.184954 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.222454 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:21:37 crc kubenswrapper[4832]: E1002 18:21:37.222663 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.222720 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
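
Beyond the node heartbeats, the same network-not-ready condition now fails individual pod syncs ("No sandbox for pod can be found" followed by "Error syncing pod, skipping"). Because the journal repeats a handful of messages thousands of times, triage is easier after collapsing it into distinct (source file, message) signatures with counts. A sketch assuming the journal has been exported to plain text first (for example with journalctl -u kubelet > kubelet.log; both the file name and the export step are illustrative):

    # summarize_kubelet_log.py -- sketch: collapse repetitive journal output.
    import collections
    import re
    import sys

    # Entries look like:
    #   Oct 02 18:21:37 crc kubenswrapper[4832]: E1002 18:21:37.222865 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="..."
    entry = re.compile(r'kubenswrapper\[\d+\]: [IWE]\d+ \S+ \d+ (\S+)\] ("[^"]*")')

    counts = collections.Counter()
    with open(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log") as fh:
        for line in fh:
            m = entry.search(line)
            if m:
                counts[(m.group(1), m.group(2))] += 1

    # Highest-frequency messages first: the NodeNotReady/CNI spam should dominate.
    for (src, msg), n in counts.most_common(10):
        print(f"{n:6d}  {src}  {msg}")
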
Oct 02 18:21:37 crc kubenswrapper[4832]: E1002 18:21:37.222865 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.287044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.287109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.287128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.287153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.287171 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.390310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.390390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.390407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.390431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.390446 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.493414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.493484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.493508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.493541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.493561 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.596422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.596503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.596562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.596591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.596613 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.699395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.699474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.699491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.699995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.700059 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.804451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.804522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.804545 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.804576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.804599 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.906883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.906957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.906979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.907009 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:37 crc kubenswrapper[4832]: I1002 18:21:37.907032 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:37Z","lastTransitionTime":"2025-10-02T18:21:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.009543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.009619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.009646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.009677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.009703 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.111576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.111614 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.111626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.111642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.111653 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.214184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.214220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.214232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.214251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.214283 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.222454 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.222502 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:38 crc kubenswrapper[4832]: E1002 18:21:38.222589 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:38 crc kubenswrapper[4832]: E1002 18:21:38.222673 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.317368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.317424 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.317438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.317457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.317469 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.421092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.421172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.421197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.421308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.421336 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.524018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.524097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.524123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.524154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.524172 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.627164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.627500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.627551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.627580 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.627602 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.730035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.730117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.730145 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.730175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.730196 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.833375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.833452 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.833461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.833478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.833487 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.936052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.936100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.936111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.936126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:38 crc kubenswrapper[4832]: I1002 18:21:38.936135 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:38Z","lastTransitionTime":"2025-10-02T18:21:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.038983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.039063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.039083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.039109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.039126 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.142354 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.142406 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.142415 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.142428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.142437 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.222512 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:39 crc kubenswrapper[4832]: E1002 18:21:39.222769 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.222962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:39 crc kubenswrapper[4832]: E1002 18:21:39.223137 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.245102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.245159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.245176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.245199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.245217 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.330121 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.342606 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.347015 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.347717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.347748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.347759 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.347773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.347785 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.364304 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.381541 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.397018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.422370 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc7baa64e27ba26b26d8b93665f35e1c101bad374090b421bab3e21acbf2d3bf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:25Z\\\",\\\"message\\\":\\\"ctor *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223030 6092 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1002 18:21:25.223348 6092 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223484 6092 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1002 18:21:25.223803 6092 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1002 18:21:25.223914 6092 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1002 18:21:25.223943 6092 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1002 18:21:25.223950 6092 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1002 18:21:25.223986 6092 factory.go:656] Stopping watch factory\\\\nI1002 18:21:25.224009 6092 handler.go:208] Removed *v1.Node event handler 7\\\\nI1002 18:21:25.224020 6092 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:25.224030 6092 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1002 18:21:25.224038 6092 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"message\\\":\\\" 6285 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:28.293613 6285 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects 
of type *v1.Pod\\\\nI1002 18:21:28.293620 6285 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-m27c2]\\\\nI1002 18:21:28.293629 6285 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1002 18:21:28.293647 6285 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-m27c2 before timer (time: 2025-10-02 18:21:29.575133677 +0000 UTC m=+1.927420082): skip\\\\nI1002 18:21:28.293662 6285 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 42.961µs)\\\\nI1002 18:21:28.310025 6285 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 18:21:28.310117 6285 factory.go:656] Stopping watch factory\\\\nI1002 18:21:28.310138 6285 ovnkube.go:599] Stopped ovnkube\\\\nI1002 18:21:28.310195 6285 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 18:21:28.310224 6285 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 18:21:28.310388 6285 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.435132 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.451515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.451563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.451576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.451595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.451610 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.455277 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.473440 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.490648 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.506175 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.524954 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.542433 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.553800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.553851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.553862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.553883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.553896 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.559669 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.574560 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.588234 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.603120 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:39Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.656355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.656396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.656409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.656424 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.656436 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.758609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.758665 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.758678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.758698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.758714 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.861863 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.861917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.861934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.861957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.861976 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.964931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.964998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.965015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.965040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:39 crc kubenswrapper[4832]: I1002 18:21:39.965056 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:39Z","lastTransitionTime":"2025-10-02T18:21:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.068492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.068536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.068547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.068564 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.068576 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.171221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.171273 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.171283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.171299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.171309 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.222259 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.222516 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:40 crc kubenswrapper[4832]: E1002 18:21:40.223853 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:40 crc kubenswrapper[4832]: E1002 18:21:40.224210 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.224463 4832 scope.go:117] "RemoveContainer" containerID="74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.243862 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.269008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ov
n\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"message\\\":\\\" 6285 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:28.293613 6285 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1002 18:21:28.293620 6285 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-m27c2]\\\\nI1002 18:21:28.293629 6285 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1002 18:21:28.293647 6285 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-m27c2 before timer (time: 2025-10-02 18:21:29.575133677 +0000 UTC m=+1.927420082): skip\\\\nI1002 18:21:28.293662 6285 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 42.961µs)\\\\nI1002 18:21:28.310025 6285 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 18:21:28.310117 6285 factory.go:656] Stopping watch factory\\\\nI1002 18:21:28.310138 6285 ovnkube.go:599] Stopped ovnkube\\\\nI1002 18:21:28.310195 6285 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 18:21:28.310224 6285 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 18:21:28.310388 6285 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.276997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.277040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.277054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.277075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.277090 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.287941 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.312574 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z"
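Every status patch in this stretch of the journal fails the same way: the kubelet's PATCH is intercepted by the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and the webhook's serving certificate expired on 2025-08-24T17:21:41Z, well before the node's current clock of 2025-10-02T18:21:40Z. A minimal Go sketch for confirming this from the node itself (a hypothetical diagnostic, not part of any logged component; it assumes the webhook is still listening on 127.0.0.1:9743):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Connect to the webhook endpoint named in the log entries above.
	// InsecureSkipVerify is for inspection only: it lets the handshake
	// complete so the expired certificate can still be read.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}

On this node the last line would print true, matching the "certificate has expired" detail repeated in every failed patch below.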
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.351135 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.373872 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.392816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.392897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.392915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.392940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.392955 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.405516 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.423854 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.442105 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.458539 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.474338 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z"
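The patch bodies in these entries are JSON strings quoted twice: once when status_manager embeds the patch in its err value, and again when klog quotes that value for the journal (hence the runs of \\\" above). A short Go sketch of how one such token can be unwrapped back into readable JSON; the raw literal below is a hypothetical, heavily shortened stand-in for a real entry, with strconv.Unquote applied once per quoting layer:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Abbreviated stand-in for one err="failed to patch status \"{...}\" for pod ..." token.
	raw := `"failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\""`

	// First layer: the quoting klog added when writing the err value.
	errVal, err := strconv.Unquote(raw)
	if err != nil {
		panic(err)
	}

	// The patch sits between `failed to patch status "` and `" for pod`.
	open := strings.Index(errVal, `"`)
	end := strings.LastIndex(errVal, `" for pod`)
	patchQuoted := errVal[open : end+1]

	// Second layer: the quoting used when the patch was embedded in the message.
	patch, err := strconv.Unquote(patchQuoted)
	if err != nil {
		panic(err)
	}

	var doc map[string]any
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		panic(err)
	}
	pretty, _ := json.MarshalIndent(doc, "", "  ")
	fmt.Println(string(pretty))
}

Decoded this way, each patch is an ordinary status merge patch: the $setElementOrder/conditions directive fixes the ordering of the conditions list while the conditions and containerStatuses arrays carry the updated values.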
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.495159 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.495209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.495222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.495241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.495253 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.503057 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.517532 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.531762 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.548230 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.597931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.597980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.597992 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.598013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.598023 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.648023 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/1.log" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.652235 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.652421 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.671198 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.690374 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.700696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.700753 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.700765 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.700786 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.700799 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.710391 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.730422 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.746361 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.769720 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.788109 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\
\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.802477 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.803790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.803837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.803849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.803867 4832 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.803878 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.817075 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.828975 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.838202 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.850574 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.865130 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.876175 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.892970 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.907256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.907373 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.907391 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.907418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.907437 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:40Z","lastTransitionTime":"2025-10-02T18:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.918029 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c0
4590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"message\\\":\\\" 6285 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:28.293613 6285 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1002 18:21:28.293620 6285 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-m27c2]\\\\nI1002 18:21:28.293629 6285 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1002 18:21:28.293647 6285 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-m27c2 before timer (time: 2025-10-02 18:21:29.575133677 +0000 UTC m=+1.927420082): skip\\\\nI1002 18:21:28.293662 6285 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 42.961µs)\\\\nI1002 18:21:28.310025 6285 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 18:21:28.310117 6285 factory.go:656] Stopping watch factory\\\\nI1002 18:21:28.310138 6285 ovnkube.go:599] Stopped ovnkube\\\\nI1002 18:21:28.310195 6285 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 18:21:28.310224 6285 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 18:21:28.310388 6285 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:40 crc kubenswrapper[4832]: I1002 18:21:40.932702 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:40Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.011136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.011819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.011836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.011853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.011863 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.077779 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.077877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.077916 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.077939 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.077959 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078033 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:22:13.078012992 +0000 UTC m=+90.047455864 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078059 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078135 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078158 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:22:13.078138386 +0000 UTC m=+90.047581258 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078163 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078182 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078196 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078138 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078368 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078392 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078224 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 18:22:13.078210648 +0000 UTC m=+90.047653520 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078462 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:22:13.078440115 +0000 UTC m=+90.047883137 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.078486 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 18:22:13.078475697 +0000 UTC m=+90.047918819 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.114807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.114864 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.114876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.114897 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.114912 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.217952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.217999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.218010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.218030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.218043 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.222875 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.222960 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.223152 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.223320 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.321099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.321149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.321161 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.321181 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.321194 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.423959 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.424033 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.424047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.424068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.424081 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.527479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.527552 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.527568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.527598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.527612 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.631121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.631177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.631189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.631213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.631228 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.659787 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/2.log" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.660516 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/1.log" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.663966 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b" exitCode=1 Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.664023 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.664077 4832 scope.go:117] "RemoveContainer" containerID="74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.665034 4832 scope.go:117] "RemoveContainer" containerID="869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b" Oct 02 18:21:41 crc kubenswrapper[4832]: E1002 18:21:41.665298 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.684687 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.704220 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.720833 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.735320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.735363 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.735375 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.735394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.735407 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.736912 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.752755 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.769150 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.773560 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.783434 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.799626 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.813851 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.830651 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.838115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.838172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.838186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.838220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.838231 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.846855 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.860308 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.874126 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.893512 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.908442 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.930501 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"message\\\":\\\" 6285 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:28.293613 6285 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1002 18:21:28.293620 6285 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-m27c2]\\\\nI1002 18:21:28.293629 6285 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1002 18:21:28.293647 6285 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-m27c2 before timer (time: 2025-10-02 18:21:29.575133677 +0000 UTC m=+1.927420082): skip\\\\nI1002 18:21:28.293662 6285 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 42.961µs)\\\\nI1002 18:21:28.310025 6285 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 18:21:28.310117 6285 factory.go:656] Stopping watch factory\\\\nI1002 18:21:28.310138 6285 ovnkube.go:599] Stopped ovnkube\\\\nI1002 18:21:28.310195 6285 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 18:21:28.310224 6285 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 18:21:28.310388 6285 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording 
success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.941636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.941920 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.942075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.942199 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.942335 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:41Z","lastTransitionTime":"2025-10-02T18:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.946760 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.964607 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.980436 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:41 crc kubenswrapper[4832]: I1002 18:21:41.994189 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.006819 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.023723 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.045702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.045796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.045810 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.045833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.045854 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.047113 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c0
4590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74fd5a25bbeae0e6bbdf7e100aedbe597dee2b2b81a9d5efcf547e7fe2935129\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"message\\\":\\\" 6285 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1002 18:21:28.293613 6285 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1002 18:21:28.293620 6285 obj_retry.go:409] Going to retry *v1.Pod resource setup for 1 objects: [openshift-multus/network-metrics-daemon-m27c2]\\\\nI1002 18:21:28.293629 6285 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1002 18:21:28.293647 6285 obj_retry.go:285] Attempting retry of *v1.Pod openshift-multus/network-metrics-daemon-m27c2 before timer (time: 2025-10-02 18:21:29.575133677 +0000 UTC m=+1.927420082): skip\\\\nI1002 18:21:28.293662 6285 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 42.961µs)\\\\nI1002 18:21:28.310025 6285 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1002 18:21:28.310117 6285 factory.go:656] Stopping watch factory\\\\nI1002 18:21:28.310138 6285 ovnkube.go:599] Stopped ovnkube\\\\nI1002 18:21:28.310195 6285 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1002 18:21:28.310224 6285 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1002 18:21:28.310388 6285 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.063570 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.076805 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.090645 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.108626 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.126736 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\
\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.143323 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c9
7aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.150739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.150790 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.150800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.150818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.150831 4832 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.168898 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3
b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.184590 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.202634 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.217315 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.221848 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.222082 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:42 crc kubenswrapper[4832]: E1002 18:21:42.222197 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:42 crc kubenswrapper[4832]: E1002 18:21:42.222195 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.229146 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:42Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.254344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.254389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.254400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.254417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.254431 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.357952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.358001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.358012 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.358032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.358044 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.461861 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.461918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.461933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.461962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.461976 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.565169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.565230 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.565241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.565288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.565303 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.667631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.667695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.667709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.667733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.667746 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.670504 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/2.log" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.770916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.770974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.770984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.771002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.771017 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.875234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.875312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.875326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.875355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.875382 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.978956 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.978996 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.979025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.979043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:42 crc kubenswrapper[4832]: I1002 18:21:42.979054 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:42Z","lastTransitionTime":"2025-10-02T18:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.082500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.082553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.082568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.082590 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.082606 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.187615 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.187672 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.187685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.187707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.187722 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.222456 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.222598 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:43 crc kubenswrapper[4832]: E1002 18:21:43.222659 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:43 crc kubenswrapper[4832]: E1002 18:21:43.222822 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.290872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.290915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.290927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.290944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.290957 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.394413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.394463 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.394474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.394500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.394516 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.497412 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.497465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.497478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.497499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.497516 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.536857 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.538184 4832 scope.go:117] "RemoveContainer" containerID="869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b" Oct 02 18:21:43 crc kubenswrapper[4832]: E1002 18:21:43.538470 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.552078 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.563857 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.573310 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.584977 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.600207 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.600310 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.600323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.600350 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.600365 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.610782 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c0
4590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.624345 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.637128 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.653797 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.671411 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.689519 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\
\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.702916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.702972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.702983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.703004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.703018 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.706584 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:43 crc kubenswrapper[4832]: E1002 18:21:43.706753 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:43 crc kubenswrapper[4832]: E1002 18:21:43.706848 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs podName:8adcf2d1-6a80-40e8-a94b-627c2b18443f nodeName:}" failed. No retries permitted until 2025-10-02 18:21:59.706824007 +0000 UTC m=+76.676267039 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs") pod "network-metrics-daemon-m27c2" (UID: "8adcf2d1-6a80-40e8-a94b-627c2b18443f") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.707471 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.726044 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.742133 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.760214 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.776348 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.791101 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.805458 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:43Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.806355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.806513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.806591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.806658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.806717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.913109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.913149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.913162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.913183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:43 crc kubenswrapper[4832]: I1002 18:21:43.913198 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:43Z","lastTransitionTime":"2025-10-02T18:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.015968 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.016023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.016035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.016054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.016066 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.119715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.119762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.119771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.119792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.119806 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.221836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.221890 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:44 crc kubenswrapper[4832]: E1002 18:21:44.222502 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:44 crc kubenswrapper[4832]: E1002 18:21:44.222408 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.222239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.222625 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.222652 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.222682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.222734 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.325288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.325676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.325686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.325704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.325715 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.428862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.429343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.429428 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.429546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.429642 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.532837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.533289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.533389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.533513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.533590 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.636909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.636957 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.636970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.636990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.637002 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.740427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.740477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.740490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.740510 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.740524 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.844154 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.844198 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.844208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.844226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.844239 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.947337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.947401 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.947410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.947430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:44 crc kubenswrapper[4832]: I1002 18:21:44.947448 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:44Z","lastTransitionTime":"2025-10-02T18:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.050635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.050689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.050702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.050723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.050737 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.154738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.154811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.154830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.154856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.154875 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.221954 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.222063 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:45 crc kubenswrapper[4832]: E1002 18:21:45.223765 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:45 crc kubenswrapper[4832]: E1002 18:21:45.224697 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.240887 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",
\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.256038 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.259845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.259979 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.260055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.260135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.260220 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.272836 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.290018 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.311222 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.321437 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.335554 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.352356 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.363121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.363209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.363223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.363243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.363278 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.366779 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.381117 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.396627 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.411172 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.428822 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.444381 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.459275 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.465587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.465620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.465628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.465641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.465650 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.470413 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.486887 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:45Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.567227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.567294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.567308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.567325 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.567339 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.670010 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.670064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.670075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.670097 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.670109 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.773293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.773336 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.773345 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.773365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.773374 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.875632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.875686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.875703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.875726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.875742 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.979938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.980007 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.980025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.980053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:45 crc kubenswrapper[4832]: I1002 18:21:45.980074 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:45Z","lastTransitionTime":"2025-10-02T18:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.083227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.083323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.083341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.083368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.083383 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.186444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.186484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.186493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.186507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.186519 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.222009 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.222093 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:46 crc kubenswrapper[4832]: E1002 18:21:46.222630 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:46 crc kubenswrapper[4832]: E1002 18:21:46.222782 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.289311 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.289358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.289367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.289384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.289394 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.392405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.392869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.393076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.393338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.393586 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.497236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.497303 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.497314 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.497337 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.497349 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.601088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.601140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.601151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.601169 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.601180 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.651478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.651539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.651558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.651644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.651666 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: E1002 18:21:46.667331 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:46Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.672771 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.672843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.672857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.672900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.672913 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: E1002 18:21:46.691407 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:46Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.697172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.697222 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.697238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.697283 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.697300 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: E1002 18:21:46.716559 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:46Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.721470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.721524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.721543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.721570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.721591 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: E1002 18:21:46.741606 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch body identical to the 18:21:46.716559 attempt above; duplicate payload omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:46Z is after 2025-08-24T17:21:41Z"
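
What the kubelet sends in these attempts is a two-way strategic-merge patch against the node's status subresource: the "$setElementOrder/conditions" directive records the order of the conditions merge list (keyed by "type"), and every retry re-sends the same allocatable/capacity/images snapshot, which is why the payload repeats byte-for-byte. A minimal sketch of how such a patch is produced with the apimachinery helper upstream kubelet relies on for this; the condition values below are illustrative, not copied from this node:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// Previous state: Ready=True. New state: a NotReady condition like the
	// one in this log. Values are illustrative.
	oldNode := v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{{
		Type:   v1.NodeReady,
		Status: v1.ConditionTrue,
		Reason: "KubeletReady",
	}}}}
	newNode := oldNode.DeepCopy()
	newNode.Status.Conditions[0].Status = v1.ConditionFalse
	newNode.Status.Conditions[0].Reason = "KubeletNotReady"
	newNode.Status.Conditions[0].Message = "container runtime network not ready"
	newNode.Status.Conditions[0].LastTransitionTime = metav1.Now()

	oldData, err := json.Marshal(oldNode)
	if err != nil {
		panic(err)
	}
	newData, err := json.Marshal(newNode)
	if err != nil {
		panic(err)
	}

	// Two-way strategic merge patch over v1.Node: the conditions list merges
	// by its "type" key, and the generator emits the
	// "$setElementOrder/conditions" directive visible in the log entries.
	patch, err := strategicpatch.CreateTwoWayMergePatch(oldData, newData, v1.Node{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))
}
```
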
Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.747498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.747555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.747582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.747612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.747636 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: E1002 18:21:46.774376 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch body identical to the 18:21:46.716559 attempt above; duplicate payload omitted] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:46Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:46 crc kubenswrapper[4832]: E1002 18:21:46.774625 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
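
Each attempt dies in the TLS handshake rather than in the webhook itself: the serving certificate's NotAfter (2025-08-24T17:21:41Z) is behind the clock, so Go's x509 verification rejects it before the POST is delivered, and once the retry budget is exhausted (nodeStatusUpdateRetry, 5 in upstream kubelet) the update is abandoned with "update node status exceeds retry count". A short diagnostic sketch that dials the listener named in the log and prints the certificate's validity window; InsecureSkipVerify only bypasses verification so the expired chain can be inspected:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint from the log and read back its serving
	// certificate. Nothing is sent over the connection.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```
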
Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.776783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.776896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.776917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.776948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.776985 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.880706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.880743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.880754 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.880772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.880785 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.984055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.984132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.984160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.984191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:46 crc kubenswrapper[4832]: I1002 18:21:46.984210 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:46Z","lastTransitionTime":"2025-10-02T18:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
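
The NotReady condition itself is independent of the webhook failure: the container runtime reports NetworkReady=false because nothing is installed in /etc/kubernetes/cni/net.d/ yet. libcni treats *.conf, *.conflist and *.json files in the conf directory as candidate network configs; below is a sketch of an equivalent check against the directory named in the log (the real kubelet/CRI-O code path differs in detail, this only mirrors the failing condition):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Scan the CNI conf directory for the file types libcni recognizes.
	// An empty result is what keeps the runtime reporting
	// NetworkReady=false in the entries above.
	confDir := "/etc/kubernetes/cni/net.d"
	var confs []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			panic(err)
		}
		confs = append(confs, matches...)
	}
	if len(confs) == 0 {
		fmt.Fprintf(os.Stderr, "no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	for _, c := range confs {
		fmt.Println(c)
	}
}
```
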
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.087567 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.087631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.087689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.087722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.087745 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.190127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.190195 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.190219 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.190247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.190302 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.222797 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:47 crc kubenswrapper[4832]: E1002 18:21:47.222954 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.223045 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:47 crc kubenswrapper[4832]: E1002 18:21:47.223326 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.292832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.292874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.292888 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.292907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.292920 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.396136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.396189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.396201 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.396218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.396229 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.499056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.499098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.499109 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.499126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.499137 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.601513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.601549 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.601557 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.601570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.601580 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.703923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.703964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.703973 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.703990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.704001 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.807217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.807321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.807340 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.807366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.807383 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.910618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.910689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.910705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.910735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:47 crc kubenswrapper[4832]: I1002 18:21:47.910754 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:47Z","lastTransitionTime":"2025-10-02T18:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.013447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.013507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.013520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.013541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.013554 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.116697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.116783 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.116802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.116839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.116856 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.219717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.219799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.219835 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.219858 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.219871 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.222298 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.222325 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:21:48 crc kubenswrapper[4832]: E1002 18:21:48.222433 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:21:48 crc kubenswrapper[4832]: E1002 18:21:48.222527 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.322453 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.322505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.322518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.322537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.322549 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.424877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.424927 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.424939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.424955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.424967 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.528803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.528902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.528930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.529046 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.529094 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.631909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.631983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.632000 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.632025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.632046 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.734844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.734899 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.734916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.734939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.734956 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.837093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.837187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.837205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.837234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.837250 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.940561 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.940624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.940641 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.940664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:48 crc kubenswrapper[4832]: I1002 18:21:48.940681 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:48Z","lastTransitionTime":"2025-10-02T18:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.043586 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.043645 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.043663 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.043687 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.043707 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.146104 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.146159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.146174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.146196 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.146214 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.222024 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.222095 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:21:49 crc kubenswrapper[4832]: E1002 18:21:49.222209 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:21:49 crc kubenswrapper[4832]: E1002 18:21:49.222365 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.249041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.249137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.249151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.249168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.249180 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.351800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.351862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.351876 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.351891 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.351905 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.454442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.454781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.455002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.455107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.455210 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.558709 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.558780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.558795 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.558818 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.558832 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.661313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.661347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.661383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.661400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.661412 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.763343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.763417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.763438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.763465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.763485 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.869962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.869995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.870004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.870019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.870045 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.973302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.973384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.973404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.973431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:49 crc kubenswrapper[4832]: I1002 18:21:49.973448 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:49Z","lastTransitionTime":"2025-10-02T18:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.077439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.077529 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.077576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.077600 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.077621 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.181414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.181589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.181611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.181669 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.181768 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.222237 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.222236 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:21:50 crc kubenswrapper[4832]: E1002 18:21:50.222486 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:21:50 crc kubenswrapper[4832]: E1002 18:21:50.222575 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.286631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.286677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.286716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.286735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.286749 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.390013 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.390079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.390091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.390114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.390129 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.492446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.492829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.492904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.492976 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.493044 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.595862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.596349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.596450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.596555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.596637 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.699493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.699527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.699536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.699551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.699561 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.802703 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.803086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.803160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.803241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.803412 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.906362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.906441 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.906473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.906501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:50 crc kubenswrapper[4832]: I1002 18:21:50.906519 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:50Z","lastTransitionTime":"2025-10-02T18:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.010220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.010610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.010673 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.010791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.010861 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.113116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.113149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.113158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.113175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.113184 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.215598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.215667 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.215690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.215714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.215728 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.222134 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:21:51 crc kubenswrapper[4832]: E1002 18:21:51.222246 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.222137 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:21:51 crc kubenswrapper[4832]: E1002 18:21:51.222490 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.318112 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.318170 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.318179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.318193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.318202 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.421807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.421860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.421873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.421892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.421908 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.524651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.524690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.524702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.524718 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.524730 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.626991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.627422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.627527 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.627624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.627702 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.731027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.731429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.731620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.731728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.731805 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.834881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.834930 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.834944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.834962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.834973 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.946444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.946514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.946538 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.946569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:51 crc kubenswrapper[4832]: I1002 18:21:51.946592 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:51Z","lastTransitionTime":"2025-10-02T18:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.049704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.049760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.049778 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.049804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.049823 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.153312 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.153358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.153369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.153385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.153396 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.222336 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.222413 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:21:52 crc kubenswrapper[4832]: E1002 18:21:52.222523 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:21:52 crc kubenswrapper[4832]: E1002 18:21:52.222603 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.255628 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.255688 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.255705 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.255728 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.255743 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.358698 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.358748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.358762 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.358780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.358791 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.461724 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.461761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.461770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.461784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.461793 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.565053 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.565130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.565149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.565171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.565188 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.668206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.668299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.668326 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.668352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.668369 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.772319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.772404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.772427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.772457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.772481 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.878136 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.878231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.878259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.878485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.878533 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.983110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.983184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.983203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.983228 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:52 crc kubenswrapper[4832]: I1002 18:21:52.983244 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:52Z","lastTransitionTime":"2025-10-02T18:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.087210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.087729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.087805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.087875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.088048 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.192024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.192073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.192086 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.192107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.192121 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.221929 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.222115 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:21:53 crc kubenswrapper[4832]: E1002 18:21:53.222238 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:53 crc kubenswrapper[4832]: E1002 18:21:53.222433 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.295737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.295805 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.295820 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.295851 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.295867 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.398661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.398733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.398747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.398774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.398790 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.502128 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.502220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.502256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.502333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.502371 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.605619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.605700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.605719 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.605744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.605762 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.709029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.709083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.709101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.709121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.709132 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.811623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.811674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.811689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.811717 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.811730 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.913774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.913815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.913823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.913837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:53 crc kubenswrapper[4832]: I1002 18:21:53.913846 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:53Z","lastTransitionTime":"2025-10-02T18:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.017732 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.017773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.017785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.017804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.017814 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.120636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.120697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.120715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.120738 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.120757 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.221759 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.221826 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:54 crc kubenswrapper[4832]: E1002 18:21:54.221883 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:54 crc kubenswrapper[4832]: E1002 18:21:54.221981 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.223061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.223092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.223101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.223115 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.223126 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.325163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.325205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.325213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.325229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.325239 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.430845 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.430895 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.430911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.430935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.430948 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.534043 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.534100 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.534110 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.534126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.534137 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.636675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.636730 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.636744 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.636764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.636779 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.738404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.738435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.738444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.738457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.738466 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.840220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.840329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.840355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.840416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.840438 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.944028 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.944079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.944093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.944113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:54 crc kubenswrapper[4832]: I1002 18:21:54.944124 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:54Z","lastTransitionTime":"2025-10-02T18:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.047671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.047721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.047733 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.047752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.047764 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.151061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.151113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.151124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.151143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.151153 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.222066 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.222149 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:55 crc kubenswrapper[4832]: E1002 18:21:55.222245 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:55 crc kubenswrapper[4832]: E1002 18:21:55.222379 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.242117 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.254939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.254987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.254998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.255015 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.255028 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.257957 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.277465 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.293146 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.307444 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.324175 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.339153 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.356200 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.357831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.357870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.357886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.357911 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.357927 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.370690 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.390805 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.404824 4832 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.416230 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.427110 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.441713 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 
18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.452659 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.460918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.461009 4832 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.461027 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.461052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.461068 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.480586 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c0
4590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.494793 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:55Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.564062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.564116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.564130 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.564147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.564161 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.666369 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.666605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.666695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.666874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.667047 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.769315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.769377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.769395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.769419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.769436 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.871528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.871566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.871579 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.871596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.871607 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.974251 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.974319 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.974330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.974351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:55 crc kubenswrapper[4832]: I1002 18:21:55.974364 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:55Z","lastTransitionTime":"2025-10-02T18:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.077386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.077514 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.077537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.077564 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.077576 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.180949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.181302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.181493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.181596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.181663 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.222437 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:56 crc kubenswrapper[4832]: E1002 18:21:56.222608 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.222709 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:56 crc kubenswrapper[4832]: E1002 18:21:56.222920 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.224520 4832 scope.go:117] "RemoveContainer" containerID="869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b" Oct 02 18:21:56 crc kubenswrapper[4832]: E1002 18:21:56.225044 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.284853 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.284906 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.284919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.284939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.284953 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.387903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.387940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.387948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.387962 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.387971 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.491338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.491397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.491414 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.491438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.491455 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.595602 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.595647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.595658 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.595676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.595688 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.697760 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.697856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.697880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.697905 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.697922 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.801558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.801649 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.801701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.801737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.801760 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.904635 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.904677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.904686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.904706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.904717 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.991468 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.991524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.991537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.991554 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:56 crc kubenswrapper[4832]: I1002 18:21:56.991566 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:56Z","lastTransitionTime":"2025-10-02T18:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: E1002 18:21:57.005865 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:57Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.009978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.010018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.010031 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.010055 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.010068 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: E1002 18:21:57.025594 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:57Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.029134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.029166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
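[Annotation: every node status patch in this stretch dies in the "node.network-node-identity.openshift.io" admission webhook because the certificate served at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-10-02. A minimal Go sketch to confirm that from the node follows; the endpoint is taken from the webhook URL in the log, and InsecureSkipVerify is set only because verification is exactly what fails and we only want to read the certificate's validity window.]

// certprobe.go: print the validity window of the certificate served by the webhook endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // host:port taken from the webhook URL in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}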
event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.029178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.029193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.029203 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: E1002 18:21:57.044642 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:57Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.047952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.047975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.047984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.047995 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.048005 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: E1002 18:21:57.059967 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:57Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.063695 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.063720 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.063735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.063746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.063754 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: E1002 18:21:57.077720 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:57Z is after 2025-08-24T17:21:41Z" Oct 02 18:21:57 crc kubenswrapper[4832]: E1002 18:21:57.077831 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.079611 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.079676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.079702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.079731 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.079753 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.183398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.183454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.183471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.183495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.183512 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.222933 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.222932 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:57 crc kubenswrapper[4832]: E1002 18:21:57.223154 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:57 crc kubenswrapper[4832]: E1002 18:21:57.223312 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.287098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.287168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.287186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.287211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.287228 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.389713 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.389785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.389803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.389823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.389836 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.493636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.493714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.493726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.493748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.493761 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.596410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.596449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.596459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.596477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.596489 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.699243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.699597 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.699617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.699637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.699650 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.801854 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.801898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.801907 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.801923 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.801942 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.905102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.905187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.905213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.905246 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:57 crc kubenswrapper[4832]: I1002 18:21:57.905301 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:57Z","lastTransitionTime":"2025-10-02T18:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.008231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.008342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.008366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.008395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.008413 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.111505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.111562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.111573 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.111591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.111603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.214244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.214351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.214366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.214390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.214407 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.222686 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:58 crc kubenswrapper[4832]: E1002 18:21:58.222945 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.223909 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:21:58 crc kubenswrapper[4832]: E1002 18:21:58.224582 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.317397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.317449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.317460 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.317478 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.317491 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.420915 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.421803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.421842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.421877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.421893 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.524994 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.525063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.525077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.525099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.525112 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.627971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.628058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.628088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.628120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.628143 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.731391 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.731444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.731455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.731477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.731490 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.834620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.834700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.834723 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.834751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.834769 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.937555 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.937607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.937617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.937634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:58 crc kubenswrapper[4832]: I1002 18:21:58.937646 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:58Z","lastTransitionTime":"2025-10-02T18:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.041054 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.041443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.041607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.041707 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.041792 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.144785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.144856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.144869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.144894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.144907 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.222764 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.222957 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:21:59 crc kubenswrapper[4832]: E1002 18:21:59.223120 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:21:59 crc kubenswrapper[4832]: E1002 18:21:59.223302 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.248320 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.248378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.248390 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.248413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.248427 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.351434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.351493 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.351511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.351536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.351552 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.455444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.455537 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.455562 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.455591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.455611 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.558089 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.558140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.558151 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.558172 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.558184 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.662140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.662574 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.662656 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.662729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.662786 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.766347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.766419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.766442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.766577 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.766619 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.795013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:21:59 crc kubenswrapper[4832]: E1002 18:21:59.795300 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:21:59 crc kubenswrapper[4832]: E1002 18:21:59.795382 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs podName:8adcf2d1-6a80-40e8-a94b-627c2b18443f nodeName:}" failed. No retries permitted until 2025-10-02 18:22:31.795357227 +0000 UTC m=+108.764800109 (durationBeforeRetry 32s). 
Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.870041 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.870106 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.870120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.870140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.870152 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.972190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.972484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.972570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.972653 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:21:59 crc kubenswrapper[4832]: I1002 18:21:59.972775 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:21:59Z","lastTransitionTime":"2025-10-02T18:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.075850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.075916 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.075928 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.075949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.075961 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.178750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.178797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.178808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.178825 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.178837 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.222408 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.222545 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:00 crc kubenswrapper[4832]: E1002 18:22:00.223024 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:00 crc kubenswrapper[4832]: E1002 18:22:00.223100 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.281945 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.282017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.282042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.282069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.282086 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.390341 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.390396 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.390410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.390431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.390445 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.494075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.494146 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.494159 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.494183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.494200 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.598499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.598563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.598576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.598598 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.598616 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.702360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.702438 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.702456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.702484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.702502 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.805934 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.805987 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.805999 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.806024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.806037 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.909372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.909763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.909830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.909912 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:00 crc kubenswrapper[4832]: I1002 18:22:00.909986 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:00Z","lastTransitionTime":"2025-10-02T18:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.013360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.013411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.013423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.013442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.013457 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.118377 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.118437 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.118447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.118467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.118480 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.221479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.221541 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.221556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.221583 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.221598 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.221903 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.221978 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:01 crc kubenswrapper[4832]: E1002 18:22:01.222048 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:01 crc kubenswrapper[4832]: E1002 18:22:01.222176 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.324785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.325183 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.326384 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.326479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.326544 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.429149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.429231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.429244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.429284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.429299 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.532461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.532536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.532547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.532566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.532578 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.636029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.636078 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.636093 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.636111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.636121 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.738967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.739014 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.739026 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.739047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.739059 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.842449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.842522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.842534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.842554 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.842566 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.947038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.947117 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.947132 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.947153 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:01 crc kubenswrapper[4832]: I1002 18:22:01.947168 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:01Z","lastTransitionTime":"2025-10-02T18:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.050838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.050903 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.050926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.050955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.050969 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.154431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.154492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.154505 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.154521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.154531 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.221949 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:02 crc kubenswrapper[4832]: E1002 18:22:02.222208 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.222256 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:02 crc kubenswrapper[4832]: E1002 18:22:02.222791 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.237400 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.258141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.258204 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.258217 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.258236 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.258247 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.360791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.360856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.360873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.360902 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.360927 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.464802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.464875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.464893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.464918 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.464935 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.568091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.568147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.568163 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.568188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.568206 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.671248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.671352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.671366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.671387 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.671403 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.747453 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/0.log" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.747573 4832 generic.go:334] "Generic (PLEG): container finished" podID="7319e265-17de-4801-8ab7-7671dba7489d" containerID="897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db" exitCode=1 Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.747694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhm4n" event={"ID":"7319e265-17de-4801-8ab7-7671dba7489d","Type":"ContainerDied","Data":"897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.748318 4832 scope.go:117] "RemoveContainer" containerID="897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.767159 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.776229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.776256 4832 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.776281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.776297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.776307 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.783147 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 
2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.797173 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.810645 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.827939 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.853359 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.872614 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.879564 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.879613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.879626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.879646 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.879664 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.891354 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.907093 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.926206 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.950151 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:02Z\\\",\\\"message\\\":\\\"2025-10-02T18:21:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93\\\\n2025-10-02T18:21:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93 to /host/opt/cni/bin/\\\\n2025-10-02T18:21:16Z [verbose] multus-daemon started\\\\n2025-10-02T18:21:16Z [verbose] Readiness Indicator file check\\\\n2025-10-02T18:22:01Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.966509 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.980053 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c88e7d96-0b8a-4102-937d-bff61c3c53cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb52e75f1f63ead7516b0cd983bf9fb364cc0f67336eb59513bdfddcb7f803d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.981991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.982030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.982042 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.982061 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:02 crc kubenswrapper[4832]: I1002 18:22:02.982074 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:02Z","lastTransitionTime":"2025-10-02T18:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.000596 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:02Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.013361 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.025886 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.038784 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.051080 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.085455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.085922 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.085947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.085970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.085994 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.189052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.189108 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.189124 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.189142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.189154 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.222065 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:03 crc kubenswrapper[4832]: E1002 18:22:03.222288 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.222877 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:03 crc kubenswrapper[4832]: E1002 18:22:03.223080 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.291807 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.291886 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.291913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.291948 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.291973 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.395060 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.395121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.395143 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.395171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.395193 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.497716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.497776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.497787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.497804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.497815 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.601404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.601474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.601492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.601518 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.601534 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.704462 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.704506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.704516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.704531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.704542 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.754586 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/0.log" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.754703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhm4n" event={"ID":"7319e265-17de-4801-8ab7-7671dba7489d","Type":"ContainerStarted","Data":"fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.772108 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.787954 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.805116 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 
18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.807347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.807393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.807407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.807425 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.807438 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.825961 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.855416 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c0
4590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.872943 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.898946 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T1
8:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.910607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.910677 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.910696 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.910725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.910743 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:03Z","lastTransitionTime":"2025-10-02T18:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.918080 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:02Z\\\",\\\"message\\\":\\\"2025-10-02T18:21:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93\\\\n2025-10-02T18:21:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93 to /host/opt/cni/bin/\\\\n2025-10-02T18:21:16Z [verbose] multus-daemon started\\\\n2025-10-02T18:21:16Z [verbose] Readiness Indicator file check\\\\n2025-10-02T18:22:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.934409 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.951986 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c88e7d96-0b8a-4102-937d-bff61c3c53cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb52e75f1f63ead7516b0cd983bf9fb364cc0f67336eb59513bdfddcb7f803d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.972103 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:03 crc kubenswrapper[4832]: I1002 18:22:03.994547 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:03Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.014410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.014471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.014481 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.014503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.014515 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.017153 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:04Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.035746 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:04Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.051821 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:04Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.074499 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:04Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.148998 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.149087 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.149113 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.149149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.149178 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.149804 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:04Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.164919 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:04Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.222527 4832 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.222643 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:04 crc kubenswrapper[4832]: E1002 18:22:04.222730 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:04 crc kubenswrapper[4832]: E1002 18:22:04.222834 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.252466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.252548 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.252573 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.252601 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.252622 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.355231 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.355288 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.355298 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.355315 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.355328 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.458152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.458213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.458225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.458249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.458278 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.561367 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.561422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.561436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.561454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.561476 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.667969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.668070 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.668088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.668131 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.668147 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.770501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.770553 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.770566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.770584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.770598 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.873901 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.873952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.873961 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.873978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.873988 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.976557 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.976605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.976617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.976632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:04 crc kubenswrapper[4832]: I1002 18:22:04.976643 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:04Z","lastTransitionTime":"2025-10-02T18:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.080022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.080358 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.080409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.080442 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.080464 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.183355 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.183417 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.183436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.183461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.183477 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.222835 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.223095 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:05 crc kubenswrapper[4832]: E1002 18:22:05.223211 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:05 crc kubenswrapper[4832]: E1002 18:22:05.224189 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.241853 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.256120 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.272781 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 
18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.286581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.286697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.286715 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.286742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.286763 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.288239 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.304724 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.336227 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s 
restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.352386 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.371665 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.391038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.391116 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.391140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.391173 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.391198 4832 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.392119 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.416013 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.443304 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:02Z\\\",\\\"message\\\":\\\"2025-10-02T18:21:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93\\\\n2025-10-02T18:21:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93 to /host/opt/cni/bin/\\\\n2025-10-02T18:21:16Z [verbose] multus-daemon started\\\\n2025-10-02T18:21:16Z [verbose] Readiness Indicator file check\\\\n2025-10-02T18:22:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.459524 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.475352 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c88e7d96-0b8a-4102-937d-bff61c3c53cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb52e75f1f63ead7516b0cd983bf9fb364cc0f67336eb59513bdfddcb7f803d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.493394 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.493867 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.493932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.493949 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.493978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.493998 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.509946 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.529777 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.549973 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.569008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:05Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.597772 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.597842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.597869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.597898 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.597919 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.701286 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.701347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.701365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.701393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.701408 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.804382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.804434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.804449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.804471 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.804486 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.908118 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.908166 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.908174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.908193 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:05 crc kubenswrapper[4832]: I1002 18:22:05.908204 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:05Z","lastTransitionTime":"2025-10-02T18:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.011630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.011691 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.011704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.011725 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.011737 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.115005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.115187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.115208 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.115232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.115249 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.218356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.218402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.218416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.218434 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.218448 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.222593 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:06 crc kubenswrapper[4832]: E1002 18:22:06.222713 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.222595 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:06 crc kubenswrapper[4832]: E1002 18:22:06.222808 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.321746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.321779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.321787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.321801 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.321811 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.424427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.425308 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.425383 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.425492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.425563 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.528692 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.528766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.528787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.528816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.528837 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.631831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.631908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.631931 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.631958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.631976 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.734654 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.734711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.734726 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.734743 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.734768 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.837686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.837764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.837788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.837815 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.837832 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.941431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.941822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.941893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.941971 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:06 crc kubenswrapper[4832]: I1002 18:22:06.942053 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:06Z","lastTransitionTime":"2025-10-02T18:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.046058 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.046575 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.046993 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.047225 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.047505 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.102056 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.102121 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.102139 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.102168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.102187 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: E1002 18:22:07.123918 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:07Z is after 
2025-08-24T17:21:41Z" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.129882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.129960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.129988 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.130022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.130046 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: E1002 18:22:07.152769 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:07Z is after 
2025-08-24T17:21:41Z" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.158487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.158773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.158980 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.159293 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.159463 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: E1002 18:22:07.177198 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:07Z is after 
2025-08-24T17:21:41Z" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.183255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.183379 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.183403 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.183430 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.183449 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: E1002 18:22:07.202798 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:07Z is after 
2025-08-24T17:21:41Z" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.208936 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.208991 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.209004 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.209036 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.209053 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.222760 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.222836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:07 crc kubenswrapper[4832]: E1002 18:22:07.223009 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:07 crc kubenswrapper[4832]: E1002 18:22:07.223133 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:07 crc kubenswrapper[4832]: E1002 18:22:07.227798 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:07Z is after 
2025-08-24T17:21:41Z" Oct 02 18:22:07 crc kubenswrapper[4832]: E1002 18:22:07.228023 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.230466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.230525 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.230543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.230566 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.230583 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.334178 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.334218 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.334233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.334252 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.334290 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.436605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.436657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.436675 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.436699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.436715 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.539873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.539935 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.539953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.539981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.539999 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.647091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.647420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.647507 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.647576 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.647641 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.750394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.750456 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.750475 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.750501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.750518 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.853617 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.853661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.853678 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.853700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.853716 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.956900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.956953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.956969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.956990 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:07 crc kubenswrapper[4832]: I1002 18:22:07.957006 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:07Z","lastTransitionTime":"2025-10-02T18:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.059933 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.059997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.060011 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.060030 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.060042 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.163811 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.163875 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.163894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.163919 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.163939 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.222305 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.222342 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:08 crc kubenswrapper[4832]: E1002 18:22:08.222478 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:08 crc kubenswrapper[4832]: E1002 18:22:08.223326 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.224103 4832 scope.go:117] "RemoveContainer" containerID="869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.266761 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.266839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.266857 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.266883 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.266902 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.370559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.370624 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.370644 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.370671 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.370691 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.478174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.478223 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.478235 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.478255 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.478292 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.581487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.581565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.581591 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.581623 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.581647 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.686393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.686458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.686476 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.686501 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.686519 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.775714 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/2.log" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.779125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.790047 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.790134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.790162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.790191 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.790210 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.892833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.893177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.893189 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.893206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.893219 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.996302 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.996385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.996409 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.996439 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:08 crc kubenswrapper[4832]: I1002 18:22:08.996463 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:08Z","lastTransitionTime":"2025-10-02T18:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.099029 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.099079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.099096 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.099120 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.099138 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.201657 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.201704 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.201716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.201735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.201748 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.222202 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.222210 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:09 crc kubenswrapper[4832]: E1002 18:22:09.222376 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:09 crc kubenswrapper[4832]: E1002 18:22:09.222449 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.303746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.303797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.303808 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.303828 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.303839 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.406700 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.406779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.406797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.406824 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.406842 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.509394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.509484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.509511 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.509539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.509559 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.612631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.612693 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.612710 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.612734 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.612754 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.715422 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.715469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.715485 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.715504 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.715517 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.788090 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/3.log" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.789022 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/2.log" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.792509 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb" exitCode=1 Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.792552 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.792588 4832 scope.go:117] "RemoveContainer" containerID="869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.793304 4832 scope.go:117] "RemoveContainer" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb" Oct 02 18:22:09 crc kubenswrapper[4832]: E1002 18:22:09.793445 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.818689 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.818747 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.818764 4832 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.818791 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.818807 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.820145 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.838042 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\
",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://869db1153cade8a0dcd922010c74f86567a738c04590927d7bf19e5e264cb24b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"message\\\":\\\"d openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI1002 18:21:41.181615 6512 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nF1002 18:21:41.181619 6512 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:21:41Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:21:41.181622 6512 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-m27c2\\\\nI1002 18:21:41.181635 6512 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-dns/node-resolver-fjjsq\\\\nI1002 18:21:41.181642 6512 obj_retr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:09Z\\\",\\\"message\\\":\\\"me 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:22:09.616661 6913 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 671.011µs\\\\nI1002 18:22:09.616651 6913 services_controller.go:453] Built service openshift-ingress-operator/metrics template LB for network=default: []services.LB{}\\\\nI1002 18:22:09.616673 6913 services_controller.go:454] Service openshift-ingress-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1002 18:22:09.616677 6913 services_controller.go:356] Processing sync for service openshift-dns-operator/metrics for network=default\\\\nI1002 18:22:09.616684 6913 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.847541 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.859201 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.874143 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.885875 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:02Z\\\",\\\"message\\\":\\\"2025-10-02T18:21:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93\\\\n2025-10-02T18:21:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93 to /host/opt/cni/bin/\\\\n2025-10-02T18:21:16Z [verbose] multus-daemon started\\\\n2025-10-02T18:21:16Z [verbose] Readiness Indicator file check\\\\n2025-10-02T18:22:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.898038 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.908458 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c88e7d96-0b8a-4102-937d-bff61c3c53cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb52e75f1f63ead7516b0cd983bf9fb364cc0f67336eb59513bdfddcb7f803d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.922714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.922756 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.922766 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.922784 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.922794 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:09Z","lastTransitionTime":"2025-10-02T18:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.924633 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.939384 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.954818 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.970348 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:09 crc kubenswrapper[4832]: I1002 18:22:09.988184 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.006393 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.020140 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.026098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.026182 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.026197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.026239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.026255 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.035141 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.051008 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.065812 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 
18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.129751 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.129802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.129816 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.129833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.129845 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.222219 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.222348 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:10 crc kubenswrapper[4832]: E1002 18:22:10.222458 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:10 crc kubenswrapper[4832]: E1002 18:22:10.222705 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.232362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.232413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.232431 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.232451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.232464 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.337003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.337075 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.337092 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.337119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.337137 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.440587 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.440714 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.440788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.440819 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.440839 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.544184 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.544229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.544241 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.544274 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.544287 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.646913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.646972 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.646984 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.647005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.647018 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.750333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.750389 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.750404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.750423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.750437 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.798015 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/3.log"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.801072 4832 scope.go:117] "RemoveContainer" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb"
Oct 02 18:22:10 crc kubenswrapper[4832]: E1002 18:22:10.801279 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac"
Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.817303 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c88e7d96-0b8a-4102-937d-bff61c3c53cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb52e75f1f63ead7516b0cd983bf9fb364cc0f67336eb59513bdfddcb7f803d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.831393 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.847494 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.852454 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.852516 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.852526 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.852563 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.852579 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.862107 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.874827 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.892374 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.907024 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:02Z\\\",\\\"message\\\":\\\"2025-10-02T18:21:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93\\\\n2025-10-02T18:21:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93 to /host/opt/cni/bin/\\\\n2025-10-02T18:21:16Z [verbose] multus-daemon started\\\\n2025-10-02T18:21:16Z [verbose] Readiness Indicator file check\\\\n2025-10-02T18:22:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.920249 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.932376 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.946220 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.954685 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.954737 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.954749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.954767 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.954779 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:10Z","lastTransitionTime":"2025-10-02T18:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.958961 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.972731 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:10 crc kubenswrapper[4832]: I1002 18:22:10.986314 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.000665 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:10Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.013233 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:11Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.026364 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:11Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.047515 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:09Z\\\",\\\"message\\\":\\\"me 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:22:09.616661 6913 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 671.011µs\\\\nI1002 18:22:09.616651 6913 services_controller.go:453] Built service openshift-ingress-operator/metrics template LB for network=default: []services.LB{}\\\\nI1002 18:22:09.616673 6913 services_controller.go:454] Service openshift-ingress-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1002 18:22:09.616677 6913 services_controller.go:356] Processing sync for service openshift-dns-operator/metrics for network=default\\\\nI1002 18:22:09.616684 6913 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:22:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:11Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.057798 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.057843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.057855 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.057870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.057896 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.059751 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:11Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.160469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.160528 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.160539 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.160556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.160567 4832 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.222224 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.222316 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:11 crc kubenswrapper[4832]: E1002 18:22:11.222427 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:11 crc kubenswrapper[4832]: E1002 18:22:11.222672 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.263838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.263884 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.263894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.263913 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.263926 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.366829 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.366869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.366877 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.366893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.366901 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.470284 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.470331 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.470342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.470360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.470372 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.573711 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.573764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.573781 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.573802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.573816 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.676492 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.676621 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.676647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.676666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.676680 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.779969 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.780085 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.780111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.780140 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.780160 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.883278 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.883329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.883342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.883362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.883375 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.986344 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.986395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.986405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.986423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:11 crc kubenswrapper[4832]: I1002 18:22:11.986435 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:11Z","lastTransitionTime":"2025-10-02T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.089210 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.089248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.089256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.089289 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.089305 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.191958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.192039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.192063 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.192091 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.192112 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.222034 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.222041 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:12 crc kubenswrapper[4832]: E1002 18:22:12.222300 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:12 crc kubenswrapper[4832]: E1002 18:22:12.222375 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.294706 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.294773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.294792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.294817 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.294838 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.397334 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.397370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.397378 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.397392 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.397401 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.500088 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.500142 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.500152 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.500167 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.500178 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.603297 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.603352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.603370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.603398 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.603412 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.706237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.706296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.706306 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.706343 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.706353 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.808716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.808777 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.808796 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.808821 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.808839 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.912025 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.912114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.912135 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.912160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:12 crc kubenswrapper[4832]: I1002 18:22:12.912180 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:12Z","lastTransitionTime":"2025-10-02T18:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.015787 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.015860 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.015878 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.015904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.015922 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.118368 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.118429 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.118447 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.118470 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.118487 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.144324 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.144499 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.144547 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.144576 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.144601 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.144753 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.144775 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.144789 4832 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.144847 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-02 18:23:17.144826423 +0000 UTC m=+154.114269295 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145057 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:17.145047669 +0000 UTC m=+154.114490551 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145118 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145135 4832 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145146 4832 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145174 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-02 18:23:17.145165313 +0000 UTC m=+154.114608185 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145349 4832 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145383 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:23:17.14537385 +0000 UTC m=+154.114816722 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145534 4832 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.145567 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-02 18:23:17.145558025 +0000 UTC m=+154.115000897 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.220589 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.220629 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.220637 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.220651 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.220662 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.221783 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.221838 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.221885 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.221975 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.323134 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.323214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.323226 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.323243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.323255 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.426147 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.426214 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.426232 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.426259 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.426311 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.529119 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.529177 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.529190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.529211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.529223 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.537716 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.538964 4832 scope.go:117] "RemoveContainer" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb" Oct 02 18:22:13 crc kubenswrapper[4832]: E1002 18:22:13.539171 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.632810 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.632868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.632882 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.632904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.632916 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.735626 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.735674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.735686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.735702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.735714 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.838395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.838450 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.838462 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.838482 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.838496 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.941546 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.941585 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.941596 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.941612 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:13 crc kubenswrapper[4832]: I1002 18:22:13.941626 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:13Z","lastTransitionTime":"2025-10-02T18:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 18:22:14 crc kubenswrapper[4832]: I1002 18:22:14.222441 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:14 crc kubenswrapper[4832]: I1002 18:22:14.222507 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:14 crc kubenswrapper[4832]: E1002 18:22:14.222659 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:14 crc kubenswrapper[4832]: E1002 18:22:14.222888 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.222357 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:15 crc kubenswrapper[4832]: E1002 18:22:15.222568 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.222638 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:15 crc kubenswrapper[4832]: E1002 18:22:15.222825 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.246160 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6425324e129d794e59180144b3823c286a65e4fd0ead515739d402b3f941c87e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8808d438727cb39621de0489482d1837108278f3bc70c6806a5629f100a9d57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.267875 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28e6c98b-e4b6-4027-8cf5-655985e80fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:09Z\\\",\\\"message\\\":\\\"me 2025-10-02T18:22:09Z is after 2025-08-24T17:21:41Z]\\\\nI1002 18:22:09.616661 6913 services_controller.go:360] Finished syncing service check-endpoints on namespace openshift-apiserver for network=default : 671.011µs\\\\nI1002 18:22:09.616651 6913 services_controller.go:453] Built service openshift-ingress-operator/metrics template LB for network=default: []services.LB{}\\\\nI1002 18:22:09.616673 6913 services_controller.go:454] Service openshift-ingress-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1002 18:22:09.616677 6913 services_controller.go:356] Processing sync for service openshift-dns-operator/metrics for network=default\\\\nI1002 18:22:09.616684 6913 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-ingress-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"d8772e82-b0a4-4596-87d3-3d517c13344b\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]stri\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:22:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rdj6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9sz9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.283556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.283619 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.283636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.283701 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.283721 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:15Z","lastTransitionTime":"2025-10-02T18:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.286086 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qdd5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1636af9-732e-45d1-bb4f-2525340a0ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3e3a4e511fb6d098458f3b7fab6e7fedd0ae022b8aace018704793c83e40819\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwkn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qdd5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z"
Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.300902 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c5105f9-32bb-4e0f-96e4-bee6a87f13aa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18857f0c558d210c90f62d12a2fe44432c0e8d56c9a884ef7f8aba75b4b3803b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65f43619bd98316172da519f42b12be3b52f40cb038dbf9228b7b5168373c682\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf6606019b3529d235c1ace6b9f28f053785daa0770d8553f85a24fddfe15d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://308b532dc225b46a20fade57bd7f23f135602c591c987e1f892226eabe5f7751\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.318060 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c88e7d96-0b8a-4102-937d-bff61c3c53cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6fb52e75f1f63ead7516b0cd983bf9fb364cc0f67336eb59513bdfddcb7f803d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f87f754a90e154d34aa82089dec8b9490b1652ddd3c6e79a2e6e89efa5667b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.338458 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc046a28-4431-4c23-80e4-a46ff6ea4d0c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efe73ba59e7c9cda7d8b6ba54dc8bb0b815f006bb4cf26eca6769e7ccdf27edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab782e798489895326e4d23c200a2f9bdabd47109d843430354a12ccef62bcca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://43b1d3c4b3f67203ab96a53073f208b93cd54ac3f5524ce28a1ec4c17bebecd2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8d7a669f787359785e5d12718b5c506a6483d009647697ca427eca38821c708\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08092ee322bef87eba2ee62c377f44e5fadcf469190b25428cba3f449fce9263\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-02T18:21:08Z\\\",\\\"message\\\":\\\"amespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1002 18:20:54.996195 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1002 18:20:54.997098 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-15898746/tls.crt::/tmp/serving-cert-15898746/tls.key\\\\\\\"\\\\nI1002 18:21:08.884445 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1002 18:21:08.887528 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1002 18:21:08.887557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1002 18:21:08.887584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1002 18:21:08.887590 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1002 18:21:08.897634 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1002 18:21:08.897776 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897811 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1002 18:21:08.897845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1002 18:21:08.897878 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI1002 18:21:08.897657 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nW1002 18:21:08.898613 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1002 18:21:08.898637 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1002 18:21:08.899737 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:54Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2801d255a33423dbec02e80b9e595c14d0e373280c6ce0b5fb47a8d8e143fbfe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:52Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fce2f16b8fbfcac3694895c406c0d4c445e39c664c77d1d2b747ee5527c69152\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:20:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.356235 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf1b02daf225470b5928b9ef96c1dc4350f6f16e6a377b18f56b590502746460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.371521 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.387313 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.387630 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.387900 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.388077 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.388242 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:15Z","lastTransitionTime":"2025-10-02T18:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.392571 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.414106 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec0c328b-b145-450a-aace-06bb839a1a02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab136faabaeaa6540b24a7c59518597e286b60ef787a15fdf52c23b1565e72e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ad579c2becb43816487deffd4005b3db3d5751630ee94a4b09dd9eeae97f904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd5ed8c30f53d334e191309fae09b97955ad40522477a739539094c0a29e4f83\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f89205cc25c8a22b5367b0b4c9d2f7eb6a08476a0d58ec6c6744864a6b3b3769\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b6eaa522e1af03df624a91ee4185231486e9c3da858ce5dff634e8a489873a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a5a443e4de50e2dc9ae1277a74429dd76727b4da587b542cd1493c3c80d9ab5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1285da30011e46d33151ae55a44cdfa799d946ee779999bfb1c520469e43491a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-02T18:21:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m4d4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zqjdg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.443826 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhm4n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7319e265-17de-4801-8ab7-7671dba7489d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-02T18:22:02Z\\\",\\\"message\\\":\\\"2025-10-02T18:21:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93\\\\n2025-10-02T18:21:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f1fe9669-6747-4f8b-bfde-f1d556480d93 to /host/opt/cni/bin/\\\\n2025-10-02T18:21:16Z [verbose] multus-daemon started\\\\n2025-10-02T18:21:16Z [verbose] Readiness Indicator file check\\\\n2025-10-02T18:22:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:22:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7qhh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhm4n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.462353 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd2b0ccb-39fd-4c79-9247-7a3587586b45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:20:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://86a6dd2fc4d240a9815247216b395bf0f35f72e85ff212382417294cee19a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8522be0d2b7e615f4a2b4ace667555881d0d0b7d0107cfb8c89a68f9921afa8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbf851b49f7c1d533979797ecedcd032c4e3eb284eb75282a982e11cab565d51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://021b5cfbf841a2366f8d3a6bed37613827e4c1415ed388d080255fca066aae06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:20:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.477516 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.491068 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.491111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.491123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.491149 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.491164 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:15Z","lastTransitionTime":"2025-10-02T18:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.496487 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e93ac374-cf01-41ab-a628-5c2cb5de7437\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eae23d229c011bf92c67fc69b07557ddbabc7988d7df83d979ed0c6d1fad2727\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zc7fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hc6sg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.510903 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-m27c2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8adcf2d1-6a80-40e8-a94b-627c2b18443f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cd6lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-m27c2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.526447 4832 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:12Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3881c3fb0224f139029c71fe4e21756e8158c9c60151da56b84a4a877897e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.544550 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjjsq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a08746c9-6dd1-4414-a681-c8a254264429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bc7d44d4ffcf0a9f76b7fa6b0153ec9e40863c9ba76fd176e2a6da0d841b8cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wf9fg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:13Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjjsq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.558815 4832 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3b59d5f-e3e4-403f-a165-f83220d4a0de\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-02T18:21:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://880fbcf69f588efc0d84051a39d65ecdb63ecd3e385cc980002d9e8b244e5ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58e7fb059407bc2b12829e8e11812994f9eff51cb1ec4eb08fa704d25f498d2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-02T18:21:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-glkzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-02T18:21:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-k9hrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:15Z is after 2025-08-24T17:21:41Z" Oct 02 
18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.594365 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.594647 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.594745 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.594844 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.594941 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:15Z","lastTransitionTime":"2025-10-02T18:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.699831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.699940 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.699964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.699997 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.700017 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:15Z","lastTransitionTime":"2025-10-02T18:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.802873 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.803329 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.803487 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.803559 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.803582 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:15Z","lastTransitionTime":"2025-10-02T18:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.906788 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.907240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.907497 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.907648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:15 crc kubenswrapper[4832]: I1002 18:22:15.907980 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:15Z","lastTransitionTime":"2025-10-02T18:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.011773 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.011832 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.011843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.011866 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.011882 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.115592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.115643 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.115655 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.115674 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.115687 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.218244 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.218570 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.218666 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.218757 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.218847 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.222546 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.222587 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:16 crc kubenswrapper[4832]: E1002 18:22:16.222712 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:16 crc kubenswrapper[4832]: E1002 18:22:16.222897 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.322049 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.322381 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.322459 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.322523 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.322629 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.426057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.426101 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.426114 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.426133 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.426145 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.528965 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.529018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.529035 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.529062 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.529076 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.632156 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.632229 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.632239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.632281 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.632297 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.735071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.735462 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.735524 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.735620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.735679 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.838789 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.838838 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.838849 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.838869 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.838880 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.942240 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.942547 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.942664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.942746 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:16 crc kubenswrapper[4832]: I1002 18:22:16.942832 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:16Z","lastTransitionTime":"2025-10-02T18:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.045328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.045938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.046008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.046084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.046165 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.149650 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.149702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.149716 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.149735 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.149750 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.222808 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.222886 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:17 crc kubenswrapper[4832]: E1002 18:22:17.223003 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:17 crc kubenswrapper[4832]: E1002 18:22:17.223158 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.253024 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.253084 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.253102 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.253127 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.253142 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.356892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.356944 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.356958 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.356978 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.356990 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.419388 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.419465 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.419560 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.419605 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.419631 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: E1002 18:22:17.440357 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.446233 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.446310 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.446323 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.446342 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.446354 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: E1002 18:22:17.463144 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.468316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.468370 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.468386 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.468406 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.468417 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: E1002 18:22:17.488220 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.493812 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.493904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.493924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.493949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.493994 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: E1002 18:22:17.513370 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.517565 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.517607 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
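Everything above is one failure repeating: the kubelet's node-status PATCH is intercepted by the webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743, and that endpoint serves a certificate that expired on 2025-08-24T17:21:41Z, long before the current time 2025-10-02T18:22:17Z, so every attempt fails identically until the kubelet gives up for this cycle ("update node status exceeds retry count" below). A quick way to confirm the validity window from the node itself is to pull the peer certificate and print its bounds; a minimal sketch in Python, with host and port taken from the failed Post above and assuming the third-party cryptography package (42.0+ for the *_utc accessors):

    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook Post

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # the cert is expired, so skip verification

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER even when unverified

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:  ", now > cert.not_valid_after_utc)

If the printed window matches the log (notAfter 2025-08-24T17:21:41Z), the webhook's serving certificate needs to be renewed; on CRC, certificate rotation is normally attempted automatically shortly after the cluster starts with a correct clock.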
event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.517618 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.517636 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.517649 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: E1002 18:22:17.530967 4832 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-02T18:22:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7a67a654-6d49-4a75-b64e-12cb73cb5c72\\\",\\\"systemUUID\\\":\\\"d8b5d84a-8c2b-4d28-a6b6-5242a6efed6a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-02T18:22:17Z is after 2025-08-24T17:21:41Z" Oct 02 18:22:17 crc kubenswrapper[4832]: E1002 18:22:17.531163 4832 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.533185 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.533477 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.533495 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.533515 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.533528 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.635568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.635610 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.635620 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.635634 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.635645 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.738536 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.738581 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.738595 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.738616 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.738629 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.841474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.841522 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.841534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.841551 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.841562 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.944914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.945021 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.945048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.945076 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:17 crc kubenswrapper[4832]: I1002 18:22:17.945096 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:17Z","lastTransitionTime":"2025-10-02T18:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.047352 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.047382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.047392 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.047407 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.047417 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.151418 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.151484 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.151508 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.151540 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.151561 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.222153 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.222233 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:18 crc kubenswrapper[4832]: E1002 18:22:18.222378 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:18 crc kubenswrapper[4832]: E1002 18:22:18.222443 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
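The NodeNotReady heartbeats all carry one reason: the container runtime reports NetworkReady=false because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/, and it will keep doing so until the cluster network provider writes its config there. A minimal sketch of the same presence check in Python (directory path and message text taken from the log; the accepted extensions follow the usual CNI config-loader convention and are an assumption here):

    from pathlib import Path

    CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log
    CANDIDATES = {".conf", ".conflist", ".json"}  # extensions CNI loaders accept

    found = []
    if CNI_DIR.is_dir():
        found = sorted(p.name for p in CNI_DIR.iterdir() if p.suffix in CANDIDATES)

    if found:
        print("CNI config present:", ", ".join(found))
    else:
        print(f"no CNI configuration file in {CNI_DIR}/ "
              "- the network provider has not written its config yet")

The check stays false here for the duration of the excerpt, which is consistent with the network provider's own components being unable to make progress while the node-identity webhook calls fail.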
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.254174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.254211 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.254221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.254237 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.254248 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.356975 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.357032 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.357045 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.357064 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.357079 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.459943 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.460019 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.460040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.460069 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.460090 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.563360 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.563449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.563474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.563506 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.563528 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.666736 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.666770 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.666780 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.666799 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.666809 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.769247 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.769347 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.769400 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.769425 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.769443 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.871981 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.872366 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.872480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.872573 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.872660 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.975894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.975938 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.975949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.975964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:18 crc kubenswrapper[4832]: I1002 18:22:18.975975 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:18Z","lastTransitionTime":"2025-10-02T18:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.078755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.078833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.078848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.078865 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.078877 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.181227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.181648 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.181749 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.181836 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.181927 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.222117 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:19 crc kubenswrapper[4832]: E1002 18:22:19.222541 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.222167 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
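At this point four workloads are cycling through the same pair of messages, a "No sandbox for pod can be found" notice followed by a pod_workers "Error syncing pod, skipping": network-check-source, network-metrics-daemon, networking-console-plugin and network-check-target. A small tally over a saved excerpt of this journal shows which pods are blocked and how often; a sketch in Python (journal.log is a hypothetical file name, and the regex keys off the pod=/podUID= fields exactly as they appear in these entries):

    import re
    from collections import Counter

    # matches the pod_workers entries shown above
    PAT = re.compile(r'"Error syncing pod, skipping" err="[^"]*" '
                     r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')

    blocked = Counter()
    with open("journal.log", encoding="utf-8") as fh:  # hypothetical saved excerpt
        for line in fh:
            for m in PAT.finditer(line):
                blocked[(m["pod"], m["uid"])] += 1

    for (pod, uid), n in blocked.most_common():
        print(f"{n:3d}x {pod} podUID={uid}")

In this excerpt each pod reappears every second or two, matching the kubelet's periodic resync of pods it cannot start.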
Oct 02 18:22:19 crc kubenswrapper[4832]: E1002 18:22:19.222838 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.284175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.284227 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.284239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.284256 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.284290 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.387741 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.387809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.387823 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.387841 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.387853 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.491785 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.491852 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.491870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.491893 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.491910 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.595206 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.595282 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.595295 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.595316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.595330 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.698107 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.698176 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.698194 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.698280 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.698301 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.801187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.801397 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.801427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.801458 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.801482 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.904467 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.904534 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.904556 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.904582 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:19 crc kubenswrapper[4832]: I1002 18:22:19.904600 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:19Z","lastTransitionTime":"2025-10-02T18:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.007443 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.007513 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.007531 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.007558 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.007576 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.111065 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.111137 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.111160 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.111190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.111208 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.214123 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.214175 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.214186 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.214203 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.214214 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.222636 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:20 crc kubenswrapper[4832]: E1002 18:22:20.222742 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.222823 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:20 crc kubenswrapper[4832]: E1002 18:22:20.223127 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.317702 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.317779 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.317804 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.317837 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.317861 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.421932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.422018 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.422038 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.422073 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.422095 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.524910 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.524970 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.524983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.525005 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.525019 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.628425 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.628967 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.628982 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.629002 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.629019 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.732168 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.732316 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.732336 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.732364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.732383 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.835868 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.835932 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.835949 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.835974 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.836003 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.938954 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.939008 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.939023 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.939040 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:20 crc kubenswrapper[4832]: I1002 18:22:20.939051 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:20Z","lastTransitionTime":"2025-10-02T18:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.041874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.041955 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.041986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.042017 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.042039 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.145351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.145394 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.145411 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.145432 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.145450 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.225487 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:21 crc kubenswrapper[4832]: E1002 18:22:21.225652 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.225883 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:21 crc kubenswrapper[4832]: E1002 18:22:21.225974 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.247802 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.247848 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.247856 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.247879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.247892 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.350904 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.350952 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.350964 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.350983 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.350995 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.454520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.454578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.454592 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.454613 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.454626 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.558094 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.558158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.558174 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.558197 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.558213 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.661839 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.661872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.661880 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.661896 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.661907 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.764800 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.764872 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.764887 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.764909 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.764925 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.867752 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.867797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.867809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.867827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.867839 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.985155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.985205 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.985221 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.985239 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:21 crc kubenswrapper[4832]: I1002 18:22:21.985251 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:21Z","lastTransitionTime":"2025-10-02T18:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.089057 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.089126 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.089141 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.089164 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.089178 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.192187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.192300 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.192520 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.192557 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.192582 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.222865 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.222904 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:22 crc kubenswrapper[4832]: E1002 18:22:22.223031 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:22 crc kubenswrapper[4832]: E1002 18:22:22.223223 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.296001 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.296067 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.296079 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.296103 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.296116 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.399294 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.399364 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.399382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.399410 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.399425 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.502356 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.502413 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.502427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.502451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.502467 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.605609 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.605676 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.605690 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.605739 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.605751 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.710349 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.710405 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.710416 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.710436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.710451 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.813099 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.813179 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.813209 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.813234 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.813254 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.916847 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.916908 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.916924 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.916953 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:22 crc kubenswrapper[4832]: I1002 18:22:22.916971 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:22Z","lastTransitionTime":"2025-10-02T18:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.020376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.020436 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.020449 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.020472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.020488 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.123697 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.123742 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.123755 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.123774 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.123785 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.222633 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.222898 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:23 crc kubenswrapper[4832]: E1002 18:22:23.223087 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:23 crc kubenswrapper[4832]: E1002 18:22:23.223234 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.227420 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.227461 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.227472 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.227498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.227512 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.331083 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.331148 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.331162 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.331190 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.331205 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.434446 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.434490 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.434503 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.434521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.434533 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.537382 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.537462 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.537479 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.537500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.537541 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.640220 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.640296 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.640309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.640330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.640342 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.743395 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.743466 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.743480 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.743500 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.743518 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.848188 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.848248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.848299 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.848328 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.848345 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.951249 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.951338 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.951351 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.951372 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:23 crc kubenswrapper[4832]: I1002 18:22:23.951385 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:23Z","lastTransitionTime":"2025-10-02T18:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.054521 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.054568 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.054578 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.054594 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.054606 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.157427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.157499 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.157517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.157543 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.157564 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.221932 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.221998 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:24 crc kubenswrapper[4832]: E1002 18:22:24.222250 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:24 crc kubenswrapper[4832]: E1002 18:22:24.222399 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.261202 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.261309 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.261335 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.261362 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.261379 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.369569 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.369661 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.369686 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.369721 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.369744 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.472491 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.472542 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.472560 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.472584 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.472603 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.576986 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.577039 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.577052 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.577071 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.577084 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.679404 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.679444 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.679455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.679469 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.679481 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.782631 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.782682 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.782699 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.782722 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.782738 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.885763 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.885833 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.885850 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.885881 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.885898 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.988357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.988393 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.988642 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.988664 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:24 crc kubenswrapper[4832]: I1002 18:22:24.988677 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:24Z","lastTransitionTime":"2025-10-02T18:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.090834 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.090870 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.090879 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.090894 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.090903 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:25Z","lastTransitionTime":"2025-10-02T18:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.193385 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.193455 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.193473 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.193498 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.193517 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:25Z","lastTransitionTime":"2025-10-02T18:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.223718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:25 crc kubenswrapper[4832]: E1002 18:22:25.223902 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.223727 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:25 crc kubenswrapper[4832]: E1002 18:22:25.224889 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.242942 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.285213 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.285193261 podStartE2EDuration="1m11.285193261s" podCreationTimestamp="2025-10-02 18:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.269440161 +0000 UTC m=+102.238883103" watchObservedRunningTime="2025-10-02 18:22:25.285193261 +0000 UTC m=+102.254636143" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.297155 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.297200 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.297213 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.297238 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.297252 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:25Z","lastTransitionTime":"2025-10-02T18:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.302343 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podStartSLOduration=72.302312392 podStartE2EDuration="1m12.302312392s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.302254701 +0000 UTC m=+102.271697603" watchObservedRunningTime="2025-10-02 18:22:25.302312392 +0000 UTC m=+102.271755264" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.330007 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fjjsq" podStartSLOduration=72.329984326 podStartE2EDuration="1m12.329984326s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.329328497 +0000 UTC m=+102.298771419" watchObservedRunningTime="2025-10-02 18:22:25.329984326 +0000 UTC m=+102.299427208" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.342759 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-k9hrg" podStartSLOduration=72.342738885 podStartE2EDuration="1m12.342738885s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.342378774 +0000 UTC m=+102.311821666" watchObservedRunningTime="2025-10-02 18:22:25.342738885 +0000 UTC m=+102.312181767" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.372428 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qdd5f" podStartSLOduration=72.372408449 podStartE2EDuration="1m12.372408449s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.355513484 +0000 UTC m=+102.324956356" watchObservedRunningTime="2025-10-02 18:22:25.372408449 +0000 UTC m=+102.341851321" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.400248 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.400321 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.400333 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.400353 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.400365 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:25Z","lastTransitionTime":"2025-10-02T18:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.502764 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.502831 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.502842 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.502862 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.502876 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:25Z","lastTransitionTime":"2025-10-02T18:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.521399 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zqjdg" podStartSLOduration=72.52136928 podStartE2EDuration="1m12.52136928s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.506089354 +0000 UTC m=+102.475532226" watchObservedRunningTime="2025-10-02 18:22:25.52136928 +0000 UTC m=+102.490812152" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.535809 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lhm4n" podStartSLOduration=72.535776049 podStartE2EDuration="1m12.535776049s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.521103382 +0000 UTC m=+102.490546254" watchObservedRunningTime="2025-10-02 18:22:25.535776049 +0000 UTC m=+102.505218921" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.536851 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=46.536841012 podStartE2EDuration="46.536841012s" podCreationTimestamp="2025-10-02 18:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.535524681 +0000 UTC m=+102.504967583" watchObservedRunningTime="2025-10-02 18:22:25.536841012 +0000 UTC m=+102.506283884" Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.569363 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.569344393 podStartE2EDuration="1m16.569344393s" podCreationTimestamp="2025-10-02 18:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.569228559 +0000 UTC m=+102.538671441" watchObservedRunningTime="2025-10-02 18:22:25.569344393 +0000 UTC m=+102.538787265" Oct 02 18:22:25 crc 
Oct 02 18:22:25 crc kubenswrapper[4832]: I1002 18:22:25.569974 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.569968121 podStartE2EDuration="23.569968121s" podCreationTimestamp="2025-10-02 18:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:25.553171109 +0000 UTC m=+102.522613981" watchObservedRunningTime="2025-10-02 18:22:25.569968121 +0000 UTC m=+102.539410993"
Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.222486 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.222702 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:26 crc kubenswrapper[4832]: E1002 18:22:26.222749 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:26 crc kubenswrapper[4832]: E1002 18:22:26.223228 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.223554 4832 scope.go:117] "RemoveContainer" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb" Oct 02 18:22:26 crc kubenswrapper[4832]: E1002 18:22:26.223744 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.323122 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.323158 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.323171 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.323187 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.323197 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:26Z","lastTransitionTime":"2025-10-02T18:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.426874 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.427243 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.427474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.427627 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.427817 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:26Z","lastTransitionTime":"2025-10-02T18:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.530892 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.530947 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.530960 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.530989 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.531002 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:26Z","lastTransitionTime":"2025-10-02T18:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.633843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.633914 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.633926 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.633946 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.633957 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:26Z","lastTransitionTime":"2025-10-02T18:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.736044 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.736098 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.736111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.736129 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.736140 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:26Z","lastTransitionTime":"2025-10-02T18:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.838357 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.838392 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.838402 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.838419 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.838428 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:26Z","lastTransitionTime":"2025-10-02T18:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.941330 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.941423 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.941451 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.941474 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:26 crc kubenswrapper[4832]: I1002 18:22:26.941491 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:26Z","lastTransitionTime":"2025-10-02T18:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.044632 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.044729 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.044776 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.044803 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.044821 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:27Z","lastTransitionTime":"2025-10-02T18:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.146917 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.147250 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.147376 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.147457 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.147525 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:27Z","lastTransitionTime":"2025-10-02T18:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.222901 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.222913 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:27 crc kubenswrapper[4832]: E1002 18:22:27.223105 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:27 crc kubenswrapper[4832]: E1002 18:22:27.223169 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.250939 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.251003 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.251022 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.251048 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.251069 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:27Z","lastTransitionTime":"2025-10-02T18:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.353792 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.353822 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.353830 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.353843 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.353853 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:27Z","lastTransitionTime":"2025-10-02T18:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.456392 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.456427 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.456435 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.456448 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.456456 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:27Z","lastTransitionTime":"2025-10-02T18:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.558748 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.559111 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.559180 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.559272 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.559340 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:27Z","lastTransitionTime":"2025-10-02T18:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.579750 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.579797 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.579809 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.579827 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.579838 4832 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-02T18:22:27Z","lastTransitionTime":"2025-10-02T18:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.627163 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"]
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.627652 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.630477 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.630542 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.630677 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.632495 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.650564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.650632 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.650666 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.650684 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.650700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.669942 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.669914343 podStartE2EDuration="2.669914343s" podCreationTimestamp="2025-10-02 18:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:27.667699806 +0000 UTC m=+104.637142698" watchObservedRunningTime="2025-10-02 18:22:27.669914343 +0000 UTC m=+104.639357255"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.751623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.752025 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.752161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.752254 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.752428 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.752498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.752611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.752769 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.764002 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
Oct 02 18:22:27 crc kubenswrapper[4832]: I1002 18:22:27.771189 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wxj85\" (UID: \"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:28 crc kubenswrapper[4832]: I1002 18:22:28.863886 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85" event={"ID":"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d","Type":"ContainerStarted","Data":"20cb4d8aaee7646b6cca7cf987ae7d0895ac18ff299a5fc2402d79a998e1c460"} Oct 02 18:22:28 crc kubenswrapper[4832]: I1002 18:22:28.863947 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85" event={"ID":"b0ef4f44-609d-4b8a-9d4f-ca9fcb47c25d","Type":"ContainerStarted","Data":"22c50f2c2314c82b98d366879be97a27e2f254ed7e0e134a10d39765ed43b238"} Oct 02 18:22:28 crc kubenswrapper[4832]: I1002 18:22:28.881018 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wxj85" podStartSLOduration=75.880995211 podStartE2EDuration="1m15.880995211s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:22:28.88028363 +0000 UTC m=+105.849726502" watchObservedRunningTime="2025-10-02 18:22:28.880995211 +0000 UTC m=+105.850438083" Oct 02 18:22:29 crc kubenswrapper[4832]: I1002 18:22:29.222679 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:29 crc kubenswrapper[4832]: E1002 18:22:29.222911 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:29 crc kubenswrapper[4832]: I1002 18:22:29.223310 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:29 crc kubenswrapper[4832]: E1002 18:22:29.223473 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:30 crc kubenswrapper[4832]: I1002 18:22:30.221846 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:30 crc kubenswrapper[4832]: E1002 18:22:30.222741 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f" Oct 02 18:22:30 crc kubenswrapper[4832]: I1002 18:22:30.221858 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:22:30 crc kubenswrapper[4832]: E1002 18:22:30.223312 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 02 18:22:31 crc kubenswrapper[4832]: I1002 18:22:31.222392 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:22:31 crc kubenswrapper[4832]: I1002 18:22:31.222406 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:22:31 crc kubenswrapper[4832]: E1002 18:22:31.222811 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 02 18:22:31 crc kubenswrapper[4832]: E1002 18:22:31.222915 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 02 18:22:31 crc kubenswrapper[4832]: I1002 18:22:31.799215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:31 crc kubenswrapper[4832]: E1002 18:22:31.799468 4832 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:22:31 crc kubenswrapper[4832]: E1002 18:22:31.800095 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs podName:8adcf2d1-6a80-40e8-a94b-627c2b18443f nodeName:}" failed. No retries permitted until 2025-10-02 18:23:35.80004958 +0000 UTC m=+172.769492492 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs") pod "network-metrics-daemon-m27c2" (UID: "8adcf2d1-6a80-40e8-a94b-627c2b18443f") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 02 18:22:32 crc kubenswrapper[4832]: I1002 18:22:32.221782 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:22:32 crc kubenswrapper[4832]: I1002 18:22:32.221800 4832 util.go:30] "No sandbox for pod can be found. 
Oct 02 18:22:32 crc kubenswrapper[4832]: E1002 18:22:32.221983 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:32 crc kubenswrapper[4832]: E1002 18:22:32.222059 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:33 crc kubenswrapper[4832]: I1002 18:22:33.222431 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:33 crc kubenswrapper[4832]: I1002 18:22:33.222453 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:33 crc kubenswrapper[4832]: E1002 18:22:33.222680 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:33 crc kubenswrapper[4832]: E1002 18:22:33.223004 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:34 crc kubenswrapper[4832]: I1002 18:22:34.222404 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:34 crc kubenswrapper[4832]: E1002 18:22:34.223073 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:34 crc kubenswrapper[4832]: I1002 18:22:34.222438 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:34 crc kubenswrapper[4832]: E1002 18:22:34.223709 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:35 crc kubenswrapper[4832]: I1002 18:22:35.223393 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:35 crc kubenswrapper[4832]: E1002 18:22:35.223735 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:35 crc kubenswrapper[4832]: I1002 18:22:35.224176 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:35 crc kubenswrapper[4832]: E1002 18:22:35.224467 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:36 crc kubenswrapper[4832]: I1002 18:22:36.222913 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:36 crc kubenswrapper[4832]: I1002 18:22:36.222959 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:36 crc kubenswrapper[4832]: E1002 18:22:36.223202 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:36 crc kubenswrapper[4832]: E1002 18:22:36.223445 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:37 crc kubenswrapper[4832]: I1002 18:22:37.222224 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:37 crc kubenswrapper[4832]: I1002 18:22:37.222298 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:37 crc kubenswrapper[4832]: E1002 18:22:37.222522 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:37 crc kubenswrapper[4832]: E1002 18:22:37.222788 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:37 crc kubenswrapper[4832]: I1002 18:22:37.223934 4832 scope.go:117] "RemoveContainer" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb"
Oct 02 18:22:37 crc kubenswrapper[4832]: E1002 18:22:37.224208 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac"
Oct 02 18:22:38 crc kubenswrapper[4832]: I1002 18:22:38.222702 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:38 crc kubenswrapper[4832]: I1002 18:22:38.222823 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:38 crc kubenswrapper[4832]: E1002 18:22:38.223448 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:38 crc kubenswrapper[4832]: E1002 18:22:38.223652 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:39 crc kubenswrapper[4832]: I1002 18:22:39.222812 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:39 crc kubenswrapper[4832]: I1002 18:22:39.222883 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:39 crc kubenswrapper[4832]: E1002 18:22:39.223089 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:39 crc kubenswrapper[4832]: E1002 18:22:39.223165 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:40 crc kubenswrapper[4832]: I1002 18:22:40.222580 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:40 crc kubenswrapper[4832]: I1002 18:22:40.222626 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:40 crc kubenswrapper[4832]: E1002 18:22:40.222917 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:40 crc kubenswrapper[4832]: E1002 18:22:40.223201 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:41 crc kubenswrapper[4832]: I1002 18:22:41.222965 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:41 crc kubenswrapper[4832]: E1002 18:22:41.223207 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:41 crc kubenswrapper[4832]: I1002 18:22:41.223728 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:41 crc kubenswrapper[4832]: E1002 18:22:41.223973 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:42 crc kubenswrapper[4832]: I1002 18:22:42.222228 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:42 crc kubenswrapper[4832]: I1002 18:22:42.222376 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:42 crc kubenswrapper[4832]: E1002 18:22:42.222609 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:42 crc kubenswrapper[4832]: E1002 18:22:42.222835 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:43 crc kubenswrapper[4832]: I1002 18:22:43.222412 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:43 crc kubenswrapper[4832]: I1002 18:22:43.222418 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:43 crc kubenswrapper[4832]: E1002 18:22:43.222656 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:43 crc kubenswrapper[4832]: E1002 18:22:43.222790 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:44 crc kubenswrapper[4832]: I1002 18:22:44.222434 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:44 crc kubenswrapper[4832]: I1002 18:22:44.222493 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:44 crc kubenswrapper[4832]: E1002 18:22:44.222679 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:44 crc kubenswrapper[4832]: E1002 18:22:44.222842 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:45 crc kubenswrapper[4832]: I1002 18:22:45.222624 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:45 crc kubenswrapper[4832]: E1002 18:22:45.222800 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:45 crc kubenswrapper[4832]: I1002 18:22:45.223191 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:45 crc kubenswrapper[4832]: E1002 18:22:45.223660 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:45 crc kubenswrapper[4832]: E1002 18:22:45.227903 4832 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Oct 02 18:22:45 crc kubenswrapper[4832]: E1002 18:22:45.432705 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 02 18:22:46 crc kubenswrapper[4832]: I1002 18:22:46.222179 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:46 crc kubenswrapper[4832]: I1002 18:22:46.222202 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:46 crc kubenswrapper[4832]: E1002 18:22:46.222359 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:46 crc kubenswrapper[4832]: E1002 18:22:46.222426 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:47 crc kubenswrapper[4832]: I1002 18:22:47.222938 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:47 crc kubenswrapper[4832]: E1002 18:22:47.223462 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:47 crc kubenswrapper[4832]: I1002 18:22:47.223854 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:47 crc kubenswrapper[4832]: E1002 18:22:47.223954 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:48 crc kubenswrapper[4832]: I1002 18:22:48.222573 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:48 crc kubenswrapper[4832]: I1002 18:22:48.222616 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:48 crc kubenswrapper[4832]: E1002 18:22:48.222957 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:48 crc kubenswrapper[4832]: E1002 18:22:48.223128 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:48 crc kubenswrapper[4832]: I1002 18:22:48.223319 4832 scope.go:117] "RemoveContainer" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb"
Oct 02 18:22:48 crc kubenswrapper[4832]: E1002 18:22:48.223502 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9sz9w_openshift-ovn-kubernetes(28e6c98b-e4b6-4027-8cf5-655985e80fac)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac"
Oct 02 18:22:49 crc kubenswrapper[4832]: I1002 18:22:49.221830 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:49 crc kubenswrapper[4832]: E1002 18:22:49.222059 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:49 crc kubenswrapper[4832]: I1002 18:22:49.222161 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:49 crc kubenswrapper[4832]: E1002 18:22:49.222552 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:49 crc kubenswrapper[4832]: I1002 18:22:49.942561 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/1.log"
Oct 02 18:22:49 crc kubenswrapper[4832]: I1002 18:22:49.944034 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/0.log"
Oct 02 18:22:49 crc kubenswrapper[4832]: I1002 18:22:49.944105 4832 generic.go:334] "Generic (PLEG): container finished" podID="7319e265-17de-4801-8ab7-7671dba7489d" containerID="fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3" exitCode=1
Oct 02 18:22:49 crc kubenswrapper[4832]: I1002 18:22:49.944164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhm4n" event={"ID":"7319e265-17de-4801-8ab7-7671dba7489d","Type":"ContainerDied","Data":"fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3"}
Oct 02 18:22:49 crc kubenswrapper[4832]: I1002 18:22:49.944242 4832 scope.go:117] "RemoveContainer" containerID="897dcec40f9f159d47b683761668da49e0c2d078c0894da561e819e3a31ad3db"
Oct 02 18:22:49 crc kubenswrapper[4832]: I1002 18:22:49.949382 4832 scope.go:117] "RemoveContainer" containerID="fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3"
Oct 02 18:22:49 crc kubenswrapper[4832]: E1002 18:22:49.949577 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lhm4n_openshift-multus(7319e265-17de-4801-8ab7-7671dba7489d)\"" pod="openshift-multus/multus-lhm4n" podUID="7319e265-17de-4801-8ab7-7671dba7489d"
Oct 02 18:22:50 crc kubenswrapper[4832]: I1002 18:22:50.222028 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:50 crc kubenswrapper[4832]: I1002 18:22:50.222086 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:50 crc kubenswrapper[4832]: E1002 18:22:50.222247 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:50 crc kubenswrapper[4832]: E1002 18:22:50.222391 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:50 crc kubenswrapper[4832]: E1002 18:22:50.434437 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 02 18:22:50 crc kubenswrapper[4832]: I1002 18:22:50.950782 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/1.log"
Oct 02 18:22:51 crc kubenswrapper[4832]: I1002 18:22:51.222338 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:51 crc kubenswrapper[4832]: I1002 18:22:51.222388 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:51 crc kubenswrapper[4832]: E1002 18:22:51.222548 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:51 crc kubenswrapper[4832]: E1002 18:22:51.222768 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:52 crc kubenswrapper[4832]: I1002 18:22:52.222394 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:52 crc kubenswrapper[4832]: I1002 18:22:52.222394 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:52 crc kubenswrapper[4832]: E1002 18:22:52.222582 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:52 crc kubenswrapper[4832]: E1002 18:22:52.222694 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:53 crc kubenswrapper[4832]: I1002 18:22:53.222593 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:53 crc kubenswrapper[4832]: I1002 18:22:53.222709 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:53 crc kubenswrapper[4832]: E1002 18:22:53.222812 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:53 crc kubenswrapper[4832]: E1002 18:22:53.222959 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:54 crc kubenswrapper[4832]: I1002 18:22:54.222550 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:54 crc kubenswrapper[4832]: I1002 18:22:54.222700 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:54 crc kubenswrapper[4832]: E1002 18:22:54.222753 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:54 crc kubenswrapper[4832]: E1002 18:22:54.222976 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:55 crc kubenswrapper[4832]: I1002 18:22:55.222076 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:55 crc kubenswrapper[4832]: I1002 18:22:55.222088 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:55 crc kubenswrapper[4832]: E1002 18:22:55.224021 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:55 crc kubenswrapper[4832]: E1002 18:22:55.224259 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:55 crc kubenswrapper[4832]: E1002 18:22:55.435380 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 02 18:22:56 crc kubenswrapper[4832]: I1002 18:22:56.221875 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:56 crc kubenswrapper[4832]: I1002 18:22:56.221909 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:56 crc kubenswrapper[4832]: E1002 18:22:56.222212 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:56 crc kubenswrapper[4832]: E1002 18:22:56.222428 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:57 crc kubenswrapper[4832]: I1002 18:22:57.222764 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:57 crc kubenswrapper[4832]: I1002 18:22:57.222870 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:57 crc kubenswrapper[4832]: E1002 18:22:57.222937 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:57 crc kubenswrapper[4832]: E1002 18:22:57.223082 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:22:58 crc kubenswrapper[4832]: I1002 18:22:58.222335 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:22:58 crc kubenswrapper[4832]: I1002 18:22:58.222351 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:22:58 crc kubenswrapper[4832]: E1002 18:22:58.222587 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:22:58 crc kubenswrapper[4832]: E1002 18:22:58.222894 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:22:59 crc kubenswrapper[4832]: I1002 18:22:59.222327 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:22:59 crc kubenswrapper[4832]: I1002 18:22:59.222499 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:22:59 crc kubenswrapper[4832]: E1002 18:22:59.222552 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:22:59 crc kubenswrapper[4832]: E1002 18:22:59.222780 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:23:00 crc kubenswrapper[4832]: I1002 18:23:00.222034 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:23:00 crc kubenswrapper[4832]: I1002 18:23:00.222091 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:23:00 crc kubenswrapper[4832]: E1002 18:23:00.222292 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:23:00 crc kubenswrapper[4832]: E1002 18:23:00.222428 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:23:00 crc kubenswrapper[4832]: E1002 18:23:00.436881 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 02 18:23:01 crc kubenswrapper[4832]: I1002 18:23:01.222619 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:23:01 crc kubenswrapper[4832]: E1002 18:23:01.222807 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:23:01 crc kubenswrapper[4832]: I1002 18:23:01.222873 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:23:01 crc kubenswrapper[4832]: E1002 18:23:01.223693 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:23:01 crc kubenswrapper[4832]: I1002 18:23:01.224147 4832 scope.go:117] "RemoveContainer" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb"
Oct 02 18:23:01 crc kubenswrapper[4832]: I1002 18:23:01.990475 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/3.log"
Oct 02 18:23:01 crc kubenswrapper[4832]: I1002 18:23:01.995161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerStarted","Data":"7b7ed9c483dab864dba141917bd6140e6b26be62665400cf5e2a0ddc4cbc418e"}
Oct 02 18:23:01 crc kubenswrapper[4832]: I1002 18:23:01.995951 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w"
Oct 02 18:23:02 crc kubenswrapper[4832]: I1002 18:23:02.030861 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podStartSLOduration=109.030822925 podStartE2EDuration="1m49.030822925s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:02.030410573 +0000 UTC m=+138.999853485" watchObservedRunningTime="2025-10-02 18:23:02.030822925 +0000 UTC m=+139.000265807"
Oct 02 18:23:02 crc kubenswrapper[4832]: I1002 18:23:02.222046 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:23:02 crc kubenswrapper[4832]: I1002 18:23:02.222083 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:23:02 crc kubenswrapper[4832]: E1002 18:23:02.222342 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:23:02 crc kubenswrapper[4832]: E1002 18:23:02.222719 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:23:02 crc kubenswrapper[4832]: I1002 18:23:02.227526 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-m27c2"]
Oct 02 18:23:02 crc kubenswrapper[4832]: I1002 18:23:02.998783 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:23:03 crc kubenswrapper[4832]: E1002 18:23:02.999044 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:23:03 crc kubenswrapper[4832]: I1002 18:23:03.221975 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:23:03 crc kubenswrapper[4832]: I1002 18:23:03.222001 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:23:03 crc kubenswrapper[4832]: E1002 18:23:03.222163 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:23:03 crc kubenswrapper[4832]: E1002 18:23:03.222236 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:23:04 crc kubenswrapper[4832]: I1002 18:23:04.222591 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:23:04 crc kubenswrapper[4832]: E1002 18:23:04.222861 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:23:05 crc kubenswrapper[4832]: I1002 18:23:05.222379 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:23:05 crc kubenswrapper[4832]: I1002 18:23:05.222432 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:23:05 crc kubenswrapper[4832]: I1002 18:23:05.225174 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:23:05 crc kubenswrapper[4832]: E1002 18:23:05.225473 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:23:05 crc kubenswrapper[4832]: I1002 18:23:05.225684 4832 scope.go:117] "RemoveContainer" containerID="fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3"
Oct 02 18:23:05 crc kubenswrapper[4832]: E1002 18:23:05.226176 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:23:05 crc kubenswrapper[4832]: E1002 18:23:05.226484 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:23:05 crc kubenswrapper[4832]: E1002 18:23:05.437872 4832 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 02 18:23:06 crc kubenswrapper[4832]: I1002 18:23:06.013879 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/1.log"
Oct 02 18:23:06 crc kubenswrapper[4832]: I1002 18:23:06.014373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhm4n" event={"ID":"7319e265-17de-4801-8ab7-7671dba7489d","Type":"ContainerStarted","Data":"3fd04f87293784afc729f1d771a6655a3c23151c34c8517161cd3a820cb2cbc5"}
Oct 02 18:23:06 crc kubenswrapper[4832]: I1002 18:23:06.221804 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:23:06 crc kubenswrapper[4832]: E1002 18:23:06.222322 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:23:07 crc kubenswrapper[4832]: I1002 18:23:07.224619 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:23:07 crc kubenswrapper[4832]: I1002 18:23:07.224665 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:23:07 crc kubenswrapper[4832]: I1002 18:23:07.224627 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:23:07 crc kubenswrapper[4832]: E1002 18:23:07.224845 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:23:07 crc kubenswrapper[4832]: E1002 18:23:07.225174 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:23:07 crc kubenswrapper[4832]: E1002 18:23:07.225244 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:23:08 crc kubenswrapper[4832]: I1002 18:23:08.222050 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:23:08 crc kubenswrapper[4832]: E1002 18:23:08.222589 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:23:09 crc kubenswrapper[4832]: I1002 18:23:09.222545 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:23:09 crc kubenswrapper[4832]: I1002 18:23:09.222602 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:23:09 crc kubenswrapper[4832]: I1002 18:23:09.222580 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:23:09 crc kubenswrapper[4832]: E1002 18:23:09.222792 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 02 18:23:09 crc kubenswrapper[4832]: E1002 18:23:09.222956 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-m27c2" podUID="8adcf2d1-6a80-40e8-a94b-627c2b18443f"
Oct 02 18:23:09 crc kubenswrapper[4832]: E1002 18:23:09.223141 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 02 18:23:10 crc kubenswrapper[4832]: I1002 18:23:10.222641 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:23:10 crc kubenswrapper[4832]: E1002 18:23:10.222844 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.222969 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.223022 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.223036 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.227598 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.227731 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.229388 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.229588 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.230216 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Oct 02 18:23:11 crc kubenswrapper[4832]: I1002 18:23:11.230337 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Oct 02 18:23:12 crc kubenswrapper[4832]: I1002 18:23:12.221881 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 02 18:23:13 crc kubenswrapper[4832]: I1002 18:23:13.560436 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w"
Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.225465 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 02 18:23:17 crc kubenswrapper[4832]: E1002 18:23:17.225729 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:25:19.225691134 +0000 UTC m=+276.195134056 (durationBeforeRetry 2m2s).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.226963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.227178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.227397 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.227580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.230133 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.236480 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.236582 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.236590 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.266155 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.276708 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:23:17 crc kubenswrapper[4832]: I1002 18:23:17.340993 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 02 18:23:17 crc kubenswrapper[4832]: W1002 18:23:17.589655 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-f10ffdd7d007f04fc56c224d6e4e7e773fb0510818a0a41f50812781ce97d3ad WatchSource:0}: Error finding container f10ffdd7d007f04fc56c224d6e4e7e773fb0510818a0a41f50812781ce97d3ad: Status 404 returned error can't find the container with id f10ffdd7d007f04fc56c224d6e4e7e773fb0510818a0a41f50812781ce97d3ad Oct 02 18:23:17 crc kubenswrapper[4832]: W1002 18:23:17.617832 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-afa340762d4cd23baf7188aea62c10ea522fa0b4b9b1f1e0b9dfd0d77553208c WatchSource:0}: Error finding container afa340762d4cd23baf7188aea62c10ea522fa0b4b9b1f1e0b9dfd0d77553208c: Status 404 returned error can't find the container with id afa340762d4cd23baf7188aea62c10ea522fa0b4b9b1f1e0b9dfd0d77553208c Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.061374 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c7df8b9de2a2b64cba8ed48daff0909dc86d29f84bad263b41fab3e60241bb5d"} Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.061426 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"afa340762d4cd23baf7188aea62c10ea522fa0b4b9b1f1e0b9dfd0d77553208c"} Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.064102 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"25b69949b5924a942c3d60909671809f7bf6063d1501ebc4a874bbd60291ec67"} Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.064141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f10ffdd7d007f04fc56c224d6e4e7e773fb0510818a0a41f50812781ce97d3ad"} Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.066671 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8c9cd59c33c3bfc6e173373f40e6c246428c3c19a0dcca99c97b3ae797e38956"} Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.066747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1309636a6195386663ac70ee91109ddc1ccd704d07db0f1088dffa56b879bbb4"} Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.066955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.769517 4832 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.831100 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgcrc"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.831863 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.832160 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.832740 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.833814 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-nk8bt"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.834512 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.834889 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nk8bt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.835100 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.835572 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.836197 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7q5cd"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.836458 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.836766 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-49qxn"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.837428 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.837592 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.837841 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.837920 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.840540 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w89r8"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.841387 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.845028 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.845209 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.845242 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.845531 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.849990 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.850073 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.850325 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.850329 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.850769 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.850965 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.851027 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.852019 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.852485 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.852642 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.852864 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.853152 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.853620 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.853656 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8fkn8"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.854437 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.864716 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.864777 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.864839 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.864912 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.864961 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.865007 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.865767 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.865879 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.865939 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.865899 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.866894 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.866907 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 
18:23:18.867235 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.867628 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.867966 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.868582 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.885468 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.885503 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.885936 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.886246 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.886454 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.886638 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.886700 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.887602 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.886847 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.901437 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.904373 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.904481 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ljdjq"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.905055 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.905465 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.905761 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.906317 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.906766 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.906923 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.906995 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.907068 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.907149 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.907200 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.907282 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.907832 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.908004 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.908168 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.908372 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.908377 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.910527 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.911530 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8gdws"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.912062 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.912468 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7ffgv"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.912902 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.913317 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.913699 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.928120 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.930477 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.930762 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.930771 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.930940 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.934045 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.934167 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.935071 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.935277 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.935314 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.935346 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.935382 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.935463 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.935491 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.936870 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.937210 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.937221 4832 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.937466 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.937605 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.937978 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.938432 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.938506 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.938979 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939244 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939534 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939575 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939642 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939700 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939719 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939783 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939858 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939880 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.939960 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.940016 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.940287 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.946870 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.946937 4832 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.947066 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.947457 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.949564 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950748 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s7lg\" (UniqueName: \"kubernetes.io/projected/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-kube-api-access-5s7lg\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950784 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-policies\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950806 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-encryption-config\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950826 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/245c924a-8033-464a-be07-6e7ebbb7d814-serving-cert\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37a9ae43-eef2-461a-a58d-c32b18ae74bc-auth-proxy-config\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950870 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ll6\" (UniqueName: \"kubernetes.io/projected/34d934c3-20c9-4091-844a-e4db7482d8e0-kube-api-access-l8ll6\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950891 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950939 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pphbl\" (UniqueName: \"kubernetes.io/projected/245c924a-8033-464a-be07-6e7ebbb7d814-kube-api-access-pphbl\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jzl\" (UniqueName: \"kubernetes.io/projected/6d7bebee-b537-4cf4-b00e-1051dac6aed6-kube-api-access-g7jzl\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.950984 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951002 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tplst\" (UniqueName: \"kubernetes.io/projected/c501c3e2-851d-452a-9fd1-0cdb21ac15e6-kube-api-access-tplst\") pod \"downloads-7954f5f757-nk8bt\" (UID: \"c501c3e2-851d-452a-9fd1-0cdb21ac15e6\") " pod="openshift-console/downloads-7954f5f757-nk8bt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951017 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951030 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951046 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951062 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-service-ca\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951086 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9vmd\" (UniqueName: \"kubernetes.io/projected/b562a645-10d9-44f7-a4fe-d3bf63ac9185-kube-api-access-t9vmd\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951109 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-oauth-config\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951128 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951148 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951166 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-trusted-ca-bundle\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951181 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-service-ca-bundle\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951197 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-config\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951212 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-dir\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951228 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7bebee-b537-4cf4-b00e-1051dac6aed6-audit-dir\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951242 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-config\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951280 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34d934c3-20c9-4091-844a-e4db7482d8e0-config\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951298 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34d934c3-20c9-4091-844a-e4db7482d8e0-images\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951314 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-audit-policies\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951330 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-oauth-serving-cert\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951348 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c249ac-abfd-42b2-b391-5018d1695100-serving-cert\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951362 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7tfg\" (UniqueName: \"kubernetes.io/projected/a7c249ac-abfd-42b2-b391-5018d1695100-kube-api-access-l7tfg\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951385 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/34d934c3-20c9-4091-844a-e4db7482d8e0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.951401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.959593 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-serving-cert\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.959655 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6wz4\" (UniqueName: \"kubernetes.io/projected/37a9ae43-eef2-461a-a58d-c32b18ae74bc-kube-api-access-c6wz4\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.959849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.959873 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.960015 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.960053 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-client-ca\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a9ae43-eef2-461a-a58d-c32b18ae74bc-config\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975375 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-serving-cert\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975404 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-serving-cert\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975593 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-client-ca\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/37a9ae43-eef2-461a-a58d-c32b18ae74bc-machine-approver-tls\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975648 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975673 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975725 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-config\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975748 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-etcd-client\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975791 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-config\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.975819 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzsm7\" (UniqueName: \"kubernetes.io/projected/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-kube-api-access-rzsm7\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.976185 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.976516 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hdqc9"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.977109 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.977319 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.977818 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.978793 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.979608 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.991599 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.991761 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tshzp"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.992241 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgcrc"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.992348 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.993027 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.993548 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.993843 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.995514 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.996528 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.997031 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.997500 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.997775 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.997802 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-k5cpd"] Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.998325 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:18 crc kubenswrapper[4832]: I1002 18:23:18.999705 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.000455 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.000914 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.001229 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.000913 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.001864 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.003438 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.003912 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.004206 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.004418 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.004582 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.005066 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.007005 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9r99z"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.007688 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.008205 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jth5v"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.008654 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.008698 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.008719 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.008919 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.009468 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.009694 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.010195 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.011554 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.012129 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.012603 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.013118 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.013942 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nk8bt"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.017289 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.018739 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.019237 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.021196 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fjcjg"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.021728 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.022065 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.022788 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.023631 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m4vv7"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.024375 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.031066 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.032279 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8gdws"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.033438 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.033886 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.034909 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8fkn8"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.036618 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.037256 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w89r8"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.039679 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-49qxn"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.042509 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7q5cd"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.042708 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.045869 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.048344 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ljdjq"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.067825 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.077615 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.077859 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.077899 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.078790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-policies\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.078833 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-encryption-config\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.078865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/245c924a-8033-464a-be07-6e7ebbb7d814-serving-cert\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.078890 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37a9ae43-eef2-461a-a58d-c32b18ae74bc-auth-proxy-config\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.078915 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ll6\" (UniqueName: \"kubernetes.io/projected/34d934c3-20c9-4091-844a-e4db7482d8e0-kube-api-access-l8ll6\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.078943 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.078974 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079000 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pphbl\" (UniqueName: \"kubernetes.io/projected/245c924a-8033-464a-be07-6e7ebbb7d814-kube-api-access-pphbl\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:19 crc 
kubenswrapper[4832]: I1002 18:23:19.079029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jzl\" (UniqueName: \"kubernetes.io/projected/6d7bebee-b537-4cf4-b00e-1051dac6aed6-kube-api-access-g7jzl\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079061 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tplst\" (UniqueName: \"kubernetes.io/projected/c501c3e2-851d-452a-9fd1-0cdb21ac15e6-kube-api-access-tplst\") pod \"downloads-7954f5f757-nk8bt\" (UID: \"c501c3e2-851d-452a-9fd1-0cdb21ac15e6\") " pod="openshift-console/downloads-7954f5f757-nk8bt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079090 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-service-ca\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079237 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9vmd\" (UniqueName: \"kubernetes.io/projected/b562a645-10d9-44f7-a4fe-d3bf63ac9185-kube-api-access-t9vmd\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079286 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-oauth-config\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079314 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079343 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079371 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-trusted-ca-bundle\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079397 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-config\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079420 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-dir\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079447 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-service-ca-bundle\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7bebee-b537-4cf4-b00e-1051dac6aed6-audit-dir\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34d934c3-20c9-4091-844a-e4db7482d8e0-config\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 
18:23:19.079527 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-config\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34d934c3-20c9-4091-844a-e4db7482d8e0-images\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-audit-policies\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079606 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-oauth-serving-cert\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c249ac-abfd-42b2-b391-5018d1695100-serving-cert\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079669 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7tfg\" (UniqueName: \"kubernetes.io/projected/a7c249ac-abfd-42b2-b391-5018d1695100-kube-api-access-l7tfg\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079694 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/34d934c3-20c9-4091-844a-e4db7482d8e0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-serving-cert\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6wz4\" (UniqueName: \"kubernetes.io/projected/37a9ae43-eef2-461a-a58d-c32b18ae74bc-kube-api-access-c6wz4\") pod \"machine-approver-56656f9798-w5ngm\" (UID: 
\"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079774 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079825 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-client-ca\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a9ae43-eef2-461a-a58d-c32b18ae74bc-config\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-serving-cert\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.079954 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-client-ca\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc 
kubenswrapper[4832]: I1002 18:23:19.079982 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/37a9ae43-eef2-461a-a58d-c32b18ae74bc-machine-approver-tls\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080014 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080039 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-serving-cert\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080092 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080121 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-config\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-etcd-client\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080185 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-config\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080213 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rzsm7\" (UniqueName: \"kubernetes.io/projected/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-kube-api-access-rzsm7\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.080235 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s7lg\" (UniqueName: \"kubernetes.io/projected/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-kube-api-access-5s7lg\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.081033 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hdqc9"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.083108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-config\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.084653 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-client-ca\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.085518 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34d934c3-20c9-4091-844a-e4db7482d8e0-images\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.085868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.086147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-audit-policies\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.086713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.088186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-oauth-serving-cert\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.088805 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-trusted-ca-bundle\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.091478 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-policies\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.092187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a9ae43-eef2-461a-a58d-c32b18ae74bc-config\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.093720 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tshzp"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.093758 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7ffgv"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.100231 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-client-ca\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.100584 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-dir\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.100725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6d7bebee-b537-4cf4-b00e-1051dac6aed6-audit-dir\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.102608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-service-ca-bundle\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.103050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.103172 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37a9ae43-eef2-461a-a58d-c32b18ae74bc-auth-proxy-config\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.103737 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d7bebee-b537-4cf4-b00e-1051dac6aed6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.104489 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-serving-cert\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.106196 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.106814 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-service-ca\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.116956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.104226 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34d934c3-20c9-4091-844a-e4db7482d8e0-config\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.127999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-config\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.128305 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c249ac-abfd-42b2-b391-5018d1695100-serving-cert\") pod \"controller-manager-879f6c89f-w89r8\" (UID: 
\"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.130235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.130308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.130420 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/245c924a-8033-464a-be07-6e7ebbb7d814-serving-cert\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.130754 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.131784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.132112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.132213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-config\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.132210 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.132515 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-encryption-config\") pod 
\"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.132768 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.133335 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-oauth-config\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.133602 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/34d934c3-20c9-4091-844a-e4db7482d8e0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.134053 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.135062 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.135962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-config\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.136108 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.136242 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.136252 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.138103 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.138648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d7bebee-b537-4cf4-b00e-1051dac6aed6-etcd-client\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: 
\"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.139159 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.140293 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.140772 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-serving-cert\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.141430 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8xkdn"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.141915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.142162 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8xkdn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.142477 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.143044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-serving-cert\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.142695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/37a9ae43-eef2-461a-a58d-c32b18ae74bc-machine-approver-tls\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.144337 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.144544 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jth5v"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.144631 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" 
Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.145371 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.145984 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.147165 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.148145 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.150312 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.153578 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.155225 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8xkdn"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.157121 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9r99z"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.160963 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fjcjg"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.162893 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.163043 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.164683 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m4vv7"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.165646 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tpvsc"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.166841 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.167734 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4jc2v"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.168975 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4jc2v" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.169176 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tpvsc"] Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.173977 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.180988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181021 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2sn5\" (UniqueName: \"kubernetes.io/projected/07068ae6-441b-4211-bd1f-e219157b4bb2-kube-api-access-r2sn5\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181056 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4k7vw\" (UID: \"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181090 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-registry-certificates\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/934c65a0-d9a8-484b-828e-b5b5db8b9575-trusted-ca\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934c65a0-d9a8-484b-828e-b5b5db8b9575-config\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181187 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934c65a0-d9a8-484b-828e-b5b5db8b9575-serving-cert\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc 
kubenswrapper[4832]: I1002 18:23:19.181207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71e7774-4bb0-42af-bba6-7473a9500d1f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181224 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-image-import-ca\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181249 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-etcd-serving-ca\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-config\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181439 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc7dc5fa-f826-40b7-b05f-7b8ed10452d4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xjgz2\" (UID: \"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181461 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-4k7vw\" (UID: \"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181485 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bftvp\" (UniqueName: \"kubernetes.io/projected/d71e7774-4bb0-42af-bba6-7473a9500d1f-kube-api-access-bftvp\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 
18:23:19.181523 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/320193d7-edcc-4e8e-8e95-8da631ea5a64-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181548 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-bound-sa-token\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181573 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-encryption-config\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181598 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07068ae6-441b-4211-bd1f-e219157b4bb2-audit-dir\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181618 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnff4\" (UniqueName: \"kubernetes.io/projected/fc7dc5fa-f826-40b7-b05f-7b8ed10452d4-kube-api-access-lnff4\") pod \"cluster-samples-operator-665b6dd947-xjgz2\" (UID: \"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8d46891-e775-4f73-b366-544ba67c1adf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181674 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klklf\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-kube-api-access-klklf\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181701 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-serving-cert\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181750 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-etcd-client\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181774 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4szx\" (UniqueName: \"kubernetes.io/projected/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-kube-api-access-q4szx\") pod \"openshift-config-operator-7777fb866f-4k7vw\" (UID: \"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.181960 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:19.681946888 +0000 UTC m=+156.651389760 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.181831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-audit\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183362 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7gmd\" (UniqueName: \"kubernetes.io/projected/934c65a0-d9a8-484b-828e-b5b5db8b9575-kube-api-access-r7gmd\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183389 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/320193d7-edcc-4e8e-8e95-8da631ea5a64-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183481 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-registry-tls\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183515 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-trusted-ca\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183539 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8d46891-e775-4f73-b366-544ba67c1adf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183562 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vz7f\" (UniqueName: \"kubernetes.io/projected/320193d7-edcc-4e8e-8e95-8da631ea5a64-kube-api-access-8vz7f\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183589 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71e7774-4bb0-42af-bba6-7473a9500d1f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/07068ae6-441b-4211-bd1f-e219157b4bb2-node-pullsecrets\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.183635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/320193d7-edcc-4e8e-8e95-8da631ea5a64-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.193970 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.213369 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.233608 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.254734 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286489 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8d46891-e775-4f73-b366-544ba67c1adf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286666 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7860295f-4280-4d00-acc0-119ded425125-metrics-tls\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286694 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a9647e-3a6c-463d-826d-48254cc4ea1f-config\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klklf\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-kube-api-access-klklf\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286743 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-serving-cert\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286761 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4a81f0de-a11b-4652-a0fd-87468ce2e04d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286784 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-etcd-client\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286808 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4szx\" (UniqueName: \"kubernetes.io/projected/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-kube-api-access-q4szx\") pod \"openshift-config-operator-7777fb866f-4k7vw\" (UID: \"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-audit\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286861 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7gmd\" (UniqueName: \"kubernetes.io/projected/934c65a0-d9a8-484b-828e-b5b5db8b9575-kube-api-access-r7gmd\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/320193d7-edcc-4e8e-8e95-8da631ea5a64-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-registry-tls\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286911 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-trusted-ca\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286932 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb287078-753a-4a35-b491-25ccc9c614a3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: \"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8d46891-e775-4f73-b366-544ba67c1adf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.286986 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vz7f\" (UniqueName: \"kubernetes.io/projected/320193d7-edcc-4e8e-8e95-8da631ea5a64-kube-api-access-8vz7f\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287006 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71e7774-4bb0-42af-bba6-7473a9500d1f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287026 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb287078-753a-4a35-b491-25ccc9c614a3-config\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: \"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/07068ae6-441b-4211-bd1f-e219157b4bb2-node-pullsecrets\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287090 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/320193d7-edcc-4e8e-8e95-8da631ea5a64-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287108 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a81f0de-a11b-4652-a0fd-87468ce2e04d-proxy-tls\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7860295f-4280-4d00-acc0-119ded425125-config-volume\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287179 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2sn5\" (UniqueName: \"kubernetes.io/projected/07068ae6-441b-4211-bd1f-e219157b4bb2-kube-api-access-r2sn5\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-4k7vw\" (UID: \"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287223 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgwbr\" (UniqueName: \"kubernetes.io/projected/7860295f-4280-4d00-acc0-119ded425125-kube-api-access-sgwbr\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-registry-certificates\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287428 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/934c65a0-d9a8-484b-828e-b5b5db8b9575-trusted-ca\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287446 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93a9647e-3a6c-463d-826d-48254cc4ea1f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287468 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71e7774-4bb0-42af-bba6-7473a9500d1f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934c65a0-d9a8-484b-828e-b5b5db8b9575-config\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287509 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/934c65a0-d9a8-484b-828e-b5b5db8b9575-serving-cert\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287538 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npcs8\" (UniqueName: \"kubernetes.io/projected/4a81f0de-a11b-4652-a0fd-87468ce2e04d-kube-api-access-npcs8\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287554 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93a9647e-3a6c-463d-826d-48254cc4ea1f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-image-import-ca\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287600 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8b95c95-e0ac-476b-9a52-1e22bb23a540-signing-cabundle\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5859\" (UniqueName: \"kubernetes.io/projected/e8b95c95-e0ac-476b-9a52-1e22bb23a540-kube-api-access-r5859\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-etcd-serving-ca\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287681 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8b95c95-e0ac-476b-9a52-1e22bb23a540-signing-key\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287732 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-config\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287756 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb287078-753a-4a35-b491-25ccc9c614a3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: \"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287778 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc7dc5fa-f826-40b7-b05f-7b8ed10452d4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xjgz2\" (UID: \"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-4k7vw\" (UID: \"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bftvp\" (UniqueName: \"kubernetes.io/projected/d71e7774-4bb0-42af-bba6-7473a9500d1f-kube-api-access-bftvp\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/320193d7-edcc-4e8e-8e95-8da631ea5a64-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287866 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-encryption-config\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07068ae6-441b-4211-bd1f-e219157b4bb2-audit-dir\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnff4\" (UniqueName: \"kubernetes.io/projected/fc7dc5fa-f826-40b7-b05f-7b8ed10452d4-kube-api-access-lnff4\") pod \"cluster-samples-operator-665b6dd947-xjgz2\" (UID: \"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.287928 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-bound-sa-token\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.288493 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/e8d46891-e775-4f73-b366-544ba67c1adf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.289138 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d71e7774-4bb0-42af-bba6-7473a9500d1f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.289287 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-audit\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.289403 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/07068ae6-441b-4211-bd1f-e219157b4bb2-node-pullsecrets\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.290784 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:19.79076662 +0000 UTC m=+156.760209492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.290843 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/320193d7-edcc-4e8e-8e95-8da631ea5a64-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.291424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-etcd-client\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.292436 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-config\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.292577 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-image-import-ca\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.292760 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-registry-certificates\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.293128 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/934c65a0-d9a8-484b-828e-b5b5db8b9575-trusted-ca\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.293444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-etcd-serving-ca\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.293790 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4k7vw\" (UID: 
\"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.293802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-trusted-ca\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.294069 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/07068ae6-441b-4211-bd1f-e219157b4bb2-audit-dir\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.294422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/934c65a0-d9a8-484b-828e-b5b5db8b9575-config\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.294552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07068ae6-441b-4211-bd1f-e219157b4bb2-trusted-ca-bundle\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.295016 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fc7dc5fa-f826-40b7-b05f-7b8ed10452d4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xjgz2\" (UID: \"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.295914 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/320193d7-edcc-4e8e-8e95-8da631ea5a64-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.296087 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-registry-tls\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.296497 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-serving-cert\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.296648 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/934c65a0-d9a8-484b-828e-b5b5db8b9575-serving-cert\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.296865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-4k7vw\" (UID: \"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.296939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8d46891-e775-4f73-b366-544ba67c1adf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.297478 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.297953 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/07068ae6-441b-4211-bd1f-e219157b4bb2-encryption-config\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.299069 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d71e7774-4bb0-42af-bba6-7473a9500d1f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.314476 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.334016 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.354544 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.374139 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389038 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb287078-753a-4a35-b491-25ccc9c614a3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: \"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389115 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb287078-753a-4a35-b491-25ccc9c614a3-config\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: 
\"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a81f0de-a11b-4652-a0fd-87468ce2e04d-proxy-tls\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389196 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7860295f-4280-4d00-acc0-119ded425125-config-volume\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389243 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgwbr\" (UniqueName: \"kubernetes.io/projected/7860295f-4280-4d00-acc0-119ded425125-kube-api-access-sgwbr\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389332 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93a9647e-3a6c-463d-826d-48254cc4ea1f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npcs8\" (UniqueName: \"kubernetes.io/projected/4a81f0de-a11b-4652-a0fd-87468ce2e04d-kube-api-access-npcs8\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389411 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93a9647e-3a6c-463d-826d-48254cc4ea1f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389447 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8b95c95-e0ac-476b-9a52-1e22bb23a540-signing-cabundle\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389480 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5859\" (UniqueName: \"kubernetes.io/projected/e8b95c95-e0ac-476b-9a52-1e22bb23a540-kube-api-access-r5859\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389512 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8b95c95-e0ac-476b-9a52-1e22bb23a540-signing-key\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389578 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb287078-753a-4a35-b491-25ccc9c614a3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: \"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389713 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7860295f-4280-4d00-acc0-119ded425125-metrics-tls\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a9647e-3a6c-463d-826d-48254cc4ea1f-config\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.389844 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:19.889828246 +0000 UTC m=+156.859271118 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.389887 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4a81f0de-a11b-4652-a0fd-87468ce2e04d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.391082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4a81f0de-a11b-4652-a0fd-87468ce2e04d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.394205 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.414459 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.426252 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb287078-753a-4a35-b491-25ccc9c614a3-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: \"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.435539 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.462311 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.475644 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.480232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb287078-753a-4a35-b491-25ccc9c614a3-config\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: \"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.490474 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
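The two errors above open a retry loop that recurs through the rest of this capture: the kubelet keeps re-queuing MountVolume.MountDevice for PVC pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 (pod image-registry-697d97f7c8-8gdws) and UnmountVolume.TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b, each attempt failing with the same root cause and a 500ms backoff (durationBeforeRetry 500ms), because the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with the kubelet after the restart. A minimal way to check driver registration in a cluster like this one; the node name crc is taken from the log hostname, and the plugin-registration path is the conventional kubelet default, so treat both as assumptions:

    # CSIDriver objects the API server knows about; kubevirt.io.hostpath-provisioner
    # should appear here once the driver is installed.
    oc get csidriver

    # The CSINode object reports which drivers have actually registered
    # with the kubelet on this node (node name assumed to be "crc").
    oc get csinode crc -o yaml

    # On the node itself, each registered plugin exposes a socket under the
    # kubelet plugin-registration directory (conventional path; may vary).
    ls /var/lib/kubelet/plugins_registry/

The errors should clear on their own once the hostpath-provisioner node plugin starts and registers; until then the image-registry pod stays pending on its volume, which is what the repeated entries below show.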
Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.490696 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:19.990664528 +0000 UTC m=+156.960107440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.491199 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.491682 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:19.991665588 +0000 UTC m=+156.961108500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.495293 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.514466 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.534994 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.554784 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.575128 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.592009 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.592197 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.092175081 +0000 UTC m=+157.061617973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.592362 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.592703 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.092696317 +0000 UTC m=+157.062139189 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.594679 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.614902 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.635442 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.654885 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.674901 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.693572 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.693978 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.193965432 +0000 UTC m=+157.163408304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.695371 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.714172 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.737673 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.755432 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.774753 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.794583 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.795413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.795843 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.295813845 +0000 UTC m=+157.265256747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.814180 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.835289 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.854155 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.875498 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.895660 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.896255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.896382 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.396362588 +0000 UTC m=+157.365805470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.896716 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.897105 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.39709402 +0000 UTC m=+157.366536902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.914581 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.926032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93a9647e-3a6c-463d-826d-48254cc4ea1f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.935415 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.954384 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.961835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a9647e-3a6c-463d-826d-48254cc4ea1f-config\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.973868 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.993819 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.997563 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.997849 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.497814319 +0000 UTC m=+157.467257241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:19 crc kubenswrapper[4832]: I1002 18:23:19.998297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:19 crc kubenswrapper[4832]: E1002 18:23:19.998728 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.498710767 +0000 UTC m=+157.468153679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.013485 4832 request.go:700] Waited for 1.008714669s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0 Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.015798 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.034759 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.044988 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4a81f0de-a11b-4652-a0fd-87468ce2e04d-proxy-tls\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.055054 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.075140 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.095408 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.099424 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.099742 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.599700463 +0000 UTC m=+157.569143375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.114693 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.135043 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.154914 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.165308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e8b95c95-e0ac-476b-9a52-1e22bb23a540-signing-key\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.175315 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.181391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e8b95c95-e0ac-476b-9a52-1e22bb23a540-signing-cabundle\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.195429 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.200855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.201432 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-02 18:23:20.701402992 +0000 UTC m=+157.670845924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.214790 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.234863 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.255437 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.276592 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.294879 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.302592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.302900 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.802867473 +0000 UTC m=+157.772310385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.303085 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.303445 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-02 18:23:20.803430881 +0000 UTC m=+157.772873763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.315567 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.335083 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.354419 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.374962 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.389457 4832 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.389566 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7860295f-4280-4d00-acc0-119ded425125-config-volume podName:7860295f-4280-4d00-acc0-119ded425125 nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.889536153 +0000 UTC m=+157.858979055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7860295f-4280-4d00-acc0-119ded425125-config-volume") pod "dns-default-fjcjg" (UID: "7860295f-4280-4d00-acc0-119ded425125") : failed to sync configmap cache: timed out waiting for the condition Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.389916 4832 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.390005 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7860295f-4280-4d00-acc0-119ded425125-metrics-tls podName:7860295f-4280-4d00-acc0-119ded425125 nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.889983667 +0000 UTC m=+157.859426579 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7860295f-4280-4d00-acc0-119ded425125-metrics-tls") pod "dns-default-fjcjg" (UID: "7860295f-4280-4d00-acc0-119ded425125") : failed to sync secret cache: timed out waiting for the condition Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.395025 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.404402 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.404556 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.904532001 +0000 UTC m=+157.873974903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.404815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.405362 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:20.905337327 +0000 UTC m=+157.874780229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.415193 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.435477 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.455657 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.475298 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.494391 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.506505 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.506791 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.006738976 +0000 UTC m=+157.976181898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.507549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.508163 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.0081437 +0000 UTC m=+157.977586612 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.516026 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.534850 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.556675 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.575762 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.595170 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.608952 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.609326 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.10925177 +0000 UTC m=+158.078694672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.609623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.610126 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.110103877 +0000 UTC m=+158.079546789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.614995 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.635229 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.655553 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.684684 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.697707 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.711480 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.711724 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.211691612 +0000 UTC m=+158.181134494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.712005 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.712624 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.21260129 +0000 UTC m=+158.182044202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.714697 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.756792 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s7lg\" (UniqueName: \"kubernetes.io/projected/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-kube-api-access-5s7lg\") pod \"oauth-openshift-558db77b4-fgcrc\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.777330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pphbl\" (UniqueName: \"kubernetes.io/projected/245c924a-8033-464a-be07-6e7ebbb7d814-kube-api-access-pphbl\") pod \"route-controller-manager-6576b87f9c-xgwwf\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.804072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jzl\" (UniqueName: \"kubernetes.io/projected/6d7bebee-b537-4cf4-b00e-1051dac6aed6-kube-api-access-g7jzl\") pod \"apiserver-7bbb656c7d-vw26f\" (UID: \"6d7bebee-b537-4cf4-b00e-1051dac6aed6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.813202 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.813382 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.313350249 +0000 UTC m=+158.282793151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.814233 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.814954 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.314914418 +0000 UTC m=+158.284357340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.815321 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tplst\" (UniqueName: \"kubernetes.io/projected/c501c3e2-851d-452a-9fd1-0cdb21ac15e6-kube-api-access-tplst\") pod \"downloads-7954f5f757-nk8bt\" (UID: \"c501c3e2-851d-452a-9fd1-0cdb21ac15e6\") " pod="openshift-console/downloads-7954f5f757-nk8bt" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.834742 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ll6\" (UniqueName: \"kubernetes.io/projected/34d934c3-20c9-4091-844a-e4db7482d8e0-kube-api-access-l8ll6\") pod \"machine-api-operator-5694c8668f-8fkn8\" (UID: \"34d934c3-20c9-4091-844a-e4db7482d8e0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.843053 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.846798 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.858037 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9vmd\" (UniqueName: \"kubernetes.io/projected/b562a645-10d9-44f7-a4fe-d3bf63ac9185-kube-api-access-t9vmd\") pod \"console-f9d7485db-7ffgv\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " pod="openshift-console/console-f9d7485db-7ffgv"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.876067 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7tfg\" (UniqueName: \"kubernetes.io/projected/a7c249ac-abfd-42b2-b391-5018d1695100-kube-api-access-l7tfg\") pod \"controller-manager-879f6c89f-w89r8\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") " pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.893971 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6wz4\" (UniqueName: \"kubernetes.io/projected/37a9ae43-eef2-461a-a58d-c32b18ae74bc-kube-api-access-c6wz4\") pod \"machine-approver-56656f9798-w5ngm\" (UID: \"37a9ae43-eef2-461a-a58d-c32b18ae74bc\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.916443 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.916684 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.917200 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.417159113 +0000 UTC m=+158.386601995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.917777 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzsm7\" (UniqueName: \"kubernetes.io/projected/ae2a13a2-cd3b-40ea-bb53-edbd0449781b-kube-api-access-rzsm7\") pod \"authentication-operator-69f744f599-49qxn\" (UID: \"ae2a13a2-cd3b-40ea-bb53-edbd0449781b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.922554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.922752 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7860295f-4280-4d00-acc0-119ded425125-metrics-tls\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.923012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7860295f-4280-4d00-acc0-119ded425125-config-volume\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg"
Oct 02 18:23:20 crc kubenswrapper[4832]: E1002 18:23:20.923349 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.423207563 +0000 UTC m=+158.392650465 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.924616 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7860295f-4280-4d00-acc0-119ded425125-config-volume\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.927842 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7860295f-4280-4d00-acc0-119ded425125-metrics-tls\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.935709 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.938688 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7ffgv"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.957439 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.963225 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.977988 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.980601 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm"
Oct 02 18:23:20 crc kubenswrapper[4832]: I1002 18:23:20.997875 4832 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Oct 02 18:23:21 crc kubenswrapper[4832]: W1002 18:23:21.011334 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a9ae43_eef2_461a_a58d_c32b18ae74bc.slice/crio-6d69745abc54e95b80b717b9f76a40c31c54dfd63bd3bf3278a890b7b74e9a66 WatchSource:0}: Error finding container 6d69745abc54e95b80b717b9f76a40c31c54dfd63bd3bf3278a890b7b74e9a66: Status 404 returned error can't find the container with id 6d69745abc54e95b80b717b9f76a40c31c54dfd63bd3bf3278a890b7b74e9a66
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.017935 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.022962 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nk8bt"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.024783 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.025128 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.525062866 +0000 UTC m=+158.494505748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.025376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws"
Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.025821 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.52580047 +0000 UTC m=+158.495243342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.030785 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.032452 4832 request.go:700] Waited for 1.865281232s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.034010 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.059136 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.074816 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.085641 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" event={"ID":"37a9ae43-eef2-461a-a58d-c32b18ae74bc","Type":"ContainerStarted","Data":"6d69745abc54e95b80b717b9f76a40c31c54dfd63bd3bf3278a890b7b74e9a66"}
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.095137 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.095177 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.113969 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8fkn8"]
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.126543 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.126891 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.626876619 +0000 UTC m=+158.596319491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.134205 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.175116 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4szx\" (UniqueName: \"kubernetes.io/projected/4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d-kube-api-access-q4szx\") pod \"openshift-config-operator-7777fb866f-4k7vw\" (UID: \"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.188219 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgcrc"]
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.194346 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-bound-sa-token\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.213435 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vz7f\" (UniqueName: \"kubernetes.io/projected/320193d7-edcc-4e8e-8e95-8da631ea5a64-kube-api-access-8vz7f\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.216164 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7ffgv"]
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.227432 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws"
Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.227765 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.727753192 +0000 UTC m=+158.697196064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.241124 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7gmd\" (UniqueName: \"kubernetes.io/projected/934c65a0-d9a8-484b-828e-b5b5db8b9575-kube-api-access-r7gmd\") pod \"console-operator-58897d9998-7q5cd\" (UID: \"934c65a0-d9a8-484b-828e-b5b5db8b9575\") " pod="openshift-console-operator/console-operator-58897d9998-7q5cd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.270773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klklf\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-kube-api-access-klklf\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.281589 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nk8bt"]
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.287122 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2sn5\" (UniqueName: \"kubernetes.io/projected/07068ae6-441b-4211-bd1f-e219157b4bb2-kube-api-access-r2sn5\") pod \"apiserver-76f77b778f-ljdjq\" (UID: \"07068ae6-441b-4211-bd1f-e219157b4bb2\") " pod="openshift-apiserver/apiserver-76f77b778f-ljdjq"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.294803 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bftvp\" (UniqueName: \"kubernetes.io/projected/d71e7774-4bb0-42af-bba6-7473a9500d1f-kube-api-access-bftvp\") pod \"openshift-apiserver-operator-796bbdcf4f-4c986\" (UID: \"d71e7774-4bb0-42af-bba6-7473a9500d1f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.311985 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f"]
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.316883 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/320193d7-edcc-4e8e-8e95-8da631ea5a64-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-86vg4\" (UID: \"320193d7-edcc-4e8e-8e95-8da631ea5a64\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.328618 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.329426 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.829406029 +0000 UTC m=+158.798848901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.329718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnff4\" (UniqueName: \"kubernetes.io/projected/fc7dc5fa-f826-40b7-b05f-7b8ed10452d4-kube-api-access-lnff4\") pod \"cluster-samples-operator-665b6dd947-xjgz2\" (UID: \"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2"
Oct 02 18:23:21 crc kubenswrapper[4832]: W1002 18:23:21.330927 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d7bebee_b537_4cf4_b00e_1051dac6aed6.slice/crio-48106a7c7c23bf5f13ce6994771794bf3fe7d8c5bba78449c4db4f9b8fe151d5 WatchSource:0}: Error finding container 48106a7c7c23bf5f13ce6994771794bf3fe7d8c5bba78449c4db4f9b8fe151d5: Status 404 returned error can't find the container with id 48106a7c7c23bf5f13ce6994771794bf3fe7d8c5bba78449c4db4f9b8fe151d5
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.354023 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb287078-753a-4a35-b491-25ccc9c614a3-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-prv7d\" (UID: \"cb287078-753a-4a35-b491-25ccc9c614a3\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.377672 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.391474 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgwbr\" (UniqueName: \"kubernetes.io/projected/7860295f-4280-4d00-acc0-119ded425125-kube-api-access-sgwbr\") pod \"dns-default-fjcjg\" (UID: \"7860295f-4280-4d00-acc0-119ded425125\") " pod="openshift-dns/dns-default-fjcjg"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.397811 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npcs8\" (UniqueName: \"kubernetes.io/projected/4a81f0de-a11b-4652-a0fd-87468ce2e04d-kube-api-access-npcs8\") pod \"machine-config-controller-84d6567774-52bdc\" (UID: \"4a81f0de-a11b-4652-a0fd-87468ce2e04d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.406412 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7q5cd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.411405 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93a9647e-3a6c-463d-826d-48254cc4ea1f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-qcz87\" (UID: \"93a9647e-3a6c-463d-826d-48254cc4ea1f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.421621 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w89r8"]
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.425648 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.431717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws"
Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.432032 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:21.932015397 +0000 UTC m=+158.901458259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.439589 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5859\" (UniqueName: \"kubernetes.io/projected/e8b95c95-e0ac-476b-9a52-1e22bb23a540-kube-api-access-r5859\") pod \"service-ca-9c57cc56f-jth5v\" (UID: \"e8b95c95-e0ac-476b-9a52-1e22bb23a540\") " pod="openshift-service-ca/service-ca-9c57cc56f-jth5v"
Oct 02 18:23:21 crc kubenswrapper[4832]: W1002 18:23:21.444778 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7c249ac_abfd_42b2_b391_5018d1695100.slice/crio-177f6bccad979fc0aa345a344515ec17c1c7d343da0e8ac5a7e1f8338b9e1483 WatchSource:0}: Error finding container 177f6bccad979fc0aa345a344515ec17c1c7d343da0e8ac5a7e1f8338b9e1483: Status 404 returned error can't find the container with id 177f6bccad979fc0aa345a344515ec17c1c7d343da0e8ac5a7e1f8338b9e1483
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.448237 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fjcjg"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.459537 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-49qxn"]
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.473527 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.523195 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-service-ca\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534902 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eebdfd-4211-4035-bd7e-3a689cf6528c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534919 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-plugins-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534954 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/665e1554-20bb-4238-8e92-b7eb966fddc7-node-bootstrap-token\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534970 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-registration-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.534985 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/563b1a80-432b-4eb1-b3d5-cf2843736168-metrics-tls\") pod \"dns-operator-744455d44c-tshzp\" (UID: \"563b1a80-432b-4eb1-b3d5-cf2843736168\") " pod="openshift-dns-operator/dns-operator-744455d44c-tshzp"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535001 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plw7h\" (UniqueName: \"kubernetes.io/projected/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-kube-api-access-plw7h\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535023 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/665e1554-20bb-4238-8e92-b7eb966fddc7-certs\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535061 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/79047375-b11f-4aa6-ae05-1bf9981b7da7-srv-cert\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldk7b\" (UniqueName: \"kubernetes.io/projected/563b1a80-432b-4eb1-b3d5-cf2843736168-kube-api-access-ldk7b\") pod \"dns-operator-744455d44c-tshzp\" (UID: \"563b1a80-432b-4eb1-b3d5-cf2843736168\") " pod="openshift-dns-operator/dns-operator-744455d44c-tshzp"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535094 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwr2c\" (UniqueName: \"kubernetes.io/projected/cca10a2e-3045-4696-9f52-263ff39d8101-kube-api-access-mwr2c\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535109 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-images\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcrps\" (UniqueName: \"kubernetes.io/projected/f6eebdfd-4211-4035-bd7e-3a689cf6528c-kube-api-access-rcrps\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535158 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/559b884d-1de9-433d-96a6-fc1f8b3622d4-serving-cert\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m7r2\" (UniqueName: \"kubernetes.io/projected/559b884d-1de9-433d-96a6-fc1f8b3622d4-kube-api-access-4m7r2\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535199 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmn2\" (UniqueName: \"kubernetes.io/projected/e291aef6-bbde-41a5-9981-96b992547e03-kube-api-access-krmn2\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e291aef6-bbde-41a5-9981-96b992547e03-secret-volume\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535241 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78sr\" (UniqueName: \"kubernetes.io/projected/5845d078-31dd-48b9-a1f8-b3cde570370c-kube-api-access-g78sr\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535283 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29c6z\" (UniqueName: \"kubernetes.io/projected/b014bcbd-189a-4310-9105-00e0fd0f624b-kube-api-access-29c6z\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535301 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzmcf\" (UniqueName: \"kubernetes.io/projected/3629d037-0605-455a-8846-b96b543f8ee6-kube-api-access-jzmcf\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535339 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5845d078-31dd-48b9-a1f8-b3cde570370c-srv-cert\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b014bcbd-189a-4310-9105-00e0fd0f624b-serving-cert\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535368 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-csi-data-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535385 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-socket-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535413 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4jns\" (UniqueName: \"kubernetes.io/projected/e1ccc88e-b013-4c52-92b1-6e6462492c3c-kube-api-access-q4jns\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535433 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869lt\" (UniqueName: \"kubernetes.io/projected/042f796e-c81a-4fd3-898c-ca596ed62bd5-kube-api-access-869lt\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535460 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-stats-auth\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535484 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-mountpoint-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535504 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnvq\" (UniqueName: \"kubernetes.io/projected/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-kube-api-access-2bnvq\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535521 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-metrics-certs\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535542 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc2q\" (UID: \"5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535559 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lsbz\" (UniqueName: \"kubernetes.io/projected/5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce-kube-api-access-5lsbz\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc2q\" (UID: \"5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535575 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee435d24-9de6-4a34-80e7-044ae5bc1bef-service-ca-bundle\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535609 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3629d037-0605-455a-8846-b96b543f8ee6-apiservice-cert\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3629d037-0605-455a-8846-b96b543f8ee6-webhook-cert\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/21b137f9-b50f-4437-a188-c7303af83cb6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9r99z\" (UID: \"21b137f9-b50f-4437-a188-c7303af83cb6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535673 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3629d037-0605-455a-8846-b96b543f8ee6-tmpfs\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535689 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68cx\" (UniqueName: \"kubernetes.io/projected/6a126c26-a7cd-48cf-8998-2f63af48e305-kube-api-access-l68cx\") pod \"ingress-canary-8xkdn\" (UID: \"6a126c26-a7cd-48cf-8998-2f63af48e305\") " pod="openshift-ingress-canary/ingress-canary-8xkdn"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6eebdfd-4211-4035-bd7e-3a689cf6528c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535744 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-default-certificate\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535759 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2tvd\" (UniqueName: \"kubernetes.io/projected/665e1554-20bb-4238-8e92-b7eb966fddc7-kube-api-access-n2tvd\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559b884d-1de9-433d-96a6-fc1f8b3622d4-config\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535838 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b89be286-e9b7-43b2-97d1-222740bca95a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncb2p\" (UID: \"b89be286-e9b7-43b2-97d1-222740bca95a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkq2d\" (UniqueName: \"kubernetes.io/projected/79047375-b11f-4aa6-ae05-1bf9981b7da7-kube-api-access-nkq2d\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535876 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-proxy-tls\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535918 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjh84\" (UniqueName: \"kubernetes.io/projected/b89be286-e9b7-43b2-97d1-222740bca95a-kube-api-access-fjh84\") pod \"package-server-manager-789f6589d5-ncb2p\" (UID: \"b89be286-e9b7-43b2-97d1-222740bca95a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535934 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddmr\" (UniqueName: \"kubernetes.io/projected/21b137f9-b50f-4437-a188-c7303af83cb6-kube-api-access-5ddmr\") pod \"multus-admission-controller-857f4d67dd-9r99z\" (UID: \"21b137f9-b50f-4437-a188-c7303af83cb6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-config\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-client\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.535988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/79047375-b11f-4aa6-ae05-1bf9981b7da7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536005 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cca10a2e-3045-4696-9f52-263ff39d8101-metrics-tls\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536026 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cca10a2e-3045-4696-9f52-263ff39d8101-bound-sa-token\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536048 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dk8h\" (UniqueName: \"kubernetes.io/projected/ee435d24-9de6-4a34-80e7-044ae5bc1bef-kube-api-access-6dk8h\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536076 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5845d078-31dd-48b9-a1f8-b3cde570370c-profile-collector-cert\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536144 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cca10a2e-3045-4696-9f52-263ff39d8101-trusted-ca\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a126c26-a7cd-48cf-8998-2f63af48e305-cert\") pod \"ingress-canary-8xkdn\" (UID: \"6a126c26-a7cd-48cf-8998-2f63af48e305\") " pod="openshift-ingress-canary/ingress-canary-8xkdn"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536192 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e291aef6-bbde-41a5-9981-96b992547e03-config-volume\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-ca\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536224 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.536240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6jc\" (UniqueName: \"kubernetes.io/projected/0f4ffc00-40b8-4dd1-9ebc-775e5cce2490-kube-api-access-nv6jc\") pod \"migrator-59844c95c7-cx5vm\" (UID: \"0f4ffc00-40b8-4dd1-9ebc-775e5cce2490\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm"
Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.536631 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.036615196 +0000 UTC m=+159.006058068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.558479 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.563942 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"]
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.595839 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2tvd\" (UniqueName: \"kubernetes.io/projected/665e1554-20bb-4238-8e92-b7eb966fddc7-kube-api-access-n2tvd\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559b884d-1de9-433d-96a6-fc1f8b3622d4-config\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638232 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b89be286-e9b7-43b2-97d1-222740bca95a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncb2p\" (UID: \"b89be286-e9b7-43b2-97d1-222740bca95a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkq2d\" (UniqueName: \"kubernetes.io/projected/79047375-b11f-4aa6-ae05-1bf9981b7da7-kube-api-access-nkq2d\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638570 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-proxy-tls\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638595 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjh84\" (UniqueName: \"kubernetes.io/projected/b89be286-e9b7-43b2-97d1-222740bca95a-kube-api-access-fjh84\") pod \"package-server-manager-789f6589d5-ncb2p\" (UID: \"b89be286-e9b7-43b2-97d1-222740bca95a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638646 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ddmr\" (UniqueName: \"kubernetes.io/projected/21b137f9-b50f-4437-a188-c7303af83cb6-kube-api-access-5ddmr\") pod \"multus-admission-controller-857f4d67dd-9r99z\" (UID: \"21b137f9-b50f-4437-a188-c7303af83cb6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-config\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638687 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-client\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638707 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/79047375-b11f-4aa6-ae05-1bf9981b7da7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cca10a2e-3045-4696-9f52-263ff39d8101-metrics-tls\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cca10a2e-3045-4696-9f52-263ff39d8101-bound-sa-token\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638782 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dk8h\" (UniqueName: \"kubernetes.io/projected/ee435d24-9de6-4a34-80e7-044ae5bc1bef-kube-api-access-6dk8h\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5845d078-31dd-48b9-a1f8-b3cde570370c-profile-collector-cert\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cca10a2e-3045-4696-9f52-263ff39d8101-trusted-ca\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a126c26-a7cd-48cf-8998-2f63af48e305-cert\") pod \"ingress-canary-8xkdn\" (UID: \"6a126c26-a7cd-48cf-8998-2f63af48e305\") " pod="openshift-ingress-canary/ingress-canary-8xkdn"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638913 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e291aef6-bbde-41a5-9981-96b992547e03-config-volume\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-ca\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.638979 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv6jc\" (UniqueName: \"kubernetes.io/projected/0f4ffc00-40b8-4dd1-9ebc-775e5cce2490-kube-api-access-nv6jc\") pod \"migrator-59844c95c7-cx5vm\" (UID: \"0f4ffc00-40b8-4dd1-9ebc-775e5cce2490\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639022 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65"
Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-service-ca\") pod \"etcd-operator-b45778765-hdqc9\" (UID:
\"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639056 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eebdfd-4211-4035-bd7e-3a689cf6528c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639072 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-plugins-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639088 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639107 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/665e1554-20bb-4238-8e92-b7eb966fddc7-node-bootstrap-token\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639129 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-registration-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639153 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/563b1a80-432b-4eb1-b3d5-cf2843736168-metrics-tls\") pod \"dns-operator-744455d44c-tshzp\" (UID: \"563b1a80-432b-4eb1-b3d5-cf2843736168\") " pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639173 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plw7h\" (UniqueName: \"kubernetes.io/projected/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-kube-api-access-plw7h\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639199 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" Oct 02 18:23:21 crc 
kubenswrapper[4832]: I1002 18:23:21.639218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/665e1554-20bb-4238-8e92-b7eb966fddc7-certs\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/79047375-b11f-4aa6-ae05-1bf9981b7da7-srv-cert\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639253 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldk7b\" (UniqueName: \"kubernetes.io/projected/563b1a80-432b-4eb1-b3d5-cf2843736168-kube-api-access-ldk7b\") pod \"dns-operator-744455d44c-tshzp\" (UID: \"563b1a80-432b-4eb1-b3d5-cf2843736168\") " pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwr2c\" (UniqueName: \"kubernetes.io/projected/cca10a2e-3045-4696-9f52-263ff39d8101-kube-api-access-mwr2c\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639407 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-images\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcrps\" (UniqueName: \"kubernetes.io/projected/f6eebdfd-4211-4035-bd7e-3a689cf6528c-kube-api-access-rcrps\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/559b884d-1de9-433d-96a6-fc1f8b3622d4-serving-cert\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639520 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4m7r2\" (UniqueName: \"kubernetes.io/projected/559b884d-1de9-433d-96a6-fc1f8b3622d4-kube-api-access-4m7r2\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krmn2\" (UniqueName: \"kubernetes.io/projected/e291aef6-bbde-41a5-9981-96b992547e03-kube-api-access-krmn2\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639571 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e291aef6-bbde-41a5-9981-96b992547e03-secret-volume\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639606 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78sr\" (UniqueName: \"kubernetes.io/projected/5845d078-31dd-48b9-a1f8-b3cde570370c-kube-api-access-g78sr\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29c6z\" (UniqueName: \"kubernetes.io/projected/b014bcbd-189a-4310-9105-00e0fd0f624b-kube-api-access-29c6z\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzmcf\" (UniqueName: \"kubernetes.io/projected/3629d037-0605-455a-8846-b96b543f8ee6-kube-api-access-jzmcf\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5845d078-31dd-48b9-a1f8-b3cde570370c-srv-cert\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b014bcbd-189a-4310-9105-00e0fd0f624b-serving-cert\") 
pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639702 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-csi-data-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-socket-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639736 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4jns\" (UniqueName: \"kubernetes.io/projected/e1ccc88e-b013-4c52-92b1-6e6462492c3c-kube-api-access-q4jns\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869lt\" (UniqueName: \"kubernetes.io/projected/042f796e-c81a-4fd3-898c-ca596ed62bd5-kube-api-access-869lt\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639770 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-stats-auth\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-mountpoint-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bnvq\" (UniqueName: \"kubernetes.io/projected/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-kube-api-access-2bnvq\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-metrics-certs\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc2q\" (UID: \"5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lsbz\" (UniqueName: \"kubernetes.io/projected/5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce-kube-api-access-5lsbz\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc2q\" (UID: \"5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee435d24-9de6-4a34-80e7-044ae5bc1bef-service-ca-bundle\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639893 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3629d037-0605-455a-8846-b96b543f8ee6-apiservice-cert\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639929 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3629d037-0605-455a-8846-b96b543f8ee6-webhook-cert\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/21b137f9-b50f-4437-a188-c7303af83cb6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9r99z\" (UID: \"21b137f9-b50f-4437-a188-c7303af83cb6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639960 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3629d037-0605-455a-8846-b96b543f8ee6-tmpfs\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639976 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l68cx\" (UniqueName: 
\"kubernetes.io/projected/6a126c26-a7cd-48cf-8998-2f63af48e305-kube-api-access-l68cx\") pod \"ingress-canary-8xkdn\" (UID: \"6a126c26-a7cd-48cf-8998-2f63af48e305\") " pod="openshift-ingress-canary/ingress-canary-8xkdn" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639979 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.639994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6eebdfd-4211-4035-bd7e-3a689cf6528c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.640107 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-default-certificate\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.640106 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/559b884d-1de9-433d-96a6-fc1f8b3622d4-config\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.643867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-ca\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.644625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-images\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.647021 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.647063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-default-certificate\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " 
pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.649099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee435d24-9de6-4a34-80e7-044ae5bc1bef-service-ca-bundle\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.649447 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b89be286-e9b7-43b2-97d1-222740bca95a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncb2p\" (UID: \"b89be286-e9b7-43b2-97d1-222740bca95a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.649679 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.149661299 +0000 UTC m=+159.119104171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.650281 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-config\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.650258 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.651864 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-stats-auth\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.652837 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6eebdfd-4211-4035-bd7e-3a689cf6528c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.653346 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/559b884d-1de9-433d-96a6-fc1f8b3622d4-serving-cert\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.655100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b014bcbd-189a-4310-9105-00e0fd0f624b-serving-cert\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.655820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.656245 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-service-ca\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.656334 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-plugins-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.657256 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-registration-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.657290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-socket-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.658320 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.660642 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cca10a2e-3045-4696-9f52-263ff39d8101-trusted-ca\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.661187 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6eebdfd-4211-4035-bd7e-3a689cf6528c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.662714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3629d037-0605-455a-8846-b96b543f8ee6-apiservice-cert\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.663606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.664486 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/79047375-b11f-4aa6-ae05-1bf9981b7da7-srv-cert\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.665606 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e291aef6-bbde-41a5-9981-96b992547e03-config-volume\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.666022 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc2q\" (UID: \"5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.666067 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.666323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee435d24-9de6-4a34-80e7-044ae5bc1bef-metrics-certs\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.666502 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-mountpoint-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.666739 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5845d078-31dd-48b9-a1f8-b3cde570370c-profile-collector-cert\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.667580 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.667948 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/042f796e-c81a-4fd3-898c-ca596ed62bd5-csi-data-dir\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.668765 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/79047375-b11f-4aa6-ae05-1bf9981b7da7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.669280 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3629d037-0605-455a-8846-b96b543f8ee6-tmpfs\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.669503 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.672000 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b014bcbd-189a-4310-9105-00e0fd0f624b-etcd-client\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.673105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cca10a2e-3045-4696-9f52-263ff39d8101-metrics-tls\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.675226 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5845d078-31dd-48b9-a1f8-b3cde570370c-srv-cert\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.681053 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3629d037-0605-455a-8846-b96b543f8ee6-webhook-cert\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.681211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e291aef6-bbde-41a5-9981-96b992547e03-secret-volume\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.681345 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6a126c26-a7cd-48cf-8998-2f63af48e305-cert\") pod \"ingress-canary-8xkdn\" (UID: \"6a126c26-a7cd-48cf-8998-2f63af48e305\") " pod="openshift-ingress-canary/ingress-canary-8xkdn" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.683579 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/563b1a80-432b-4eb1-b3d5-cf2843736168-metrics-tls\") pod \"dns-operator-744455d44c-tshzp\" (UID: \"563b1a80-432b-4eb1-b3d5-cf2843736168\") " pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.685307 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/665e1554-20bb-4238-8e92-b7eb966fddc7-node-bootstrap-token\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.688448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/21b137f9-b50f-4437-a188-c7303af83cb6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9r99z\" (UID: 
\"21b137f9-b50f-4437-a188-c7303af83cb6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.690135 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.694011 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwr2c\" (UniqueName: \"kubernetes.io/projected/cca10a2e-3045-4696-9f52-263ff39d8101-kube-api-access-mwr2c\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.695839 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/665e1554-20bb-4238-8e92-b7eb966fddc7-certs\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.698976 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-proxy-tls\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.713163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bnvq\" (UniqueName: \"kubernetes.io/projected/0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2-kube-api-access-2bnvq\") pod \"openshift-controller-manager-operator-756b6f6bc6-k6rmw\" (UID: \"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.739356 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcrps\" (UniqueName: \"kubernetes.io/projected/f6eebdfd-4211-4035-bd7e-3a689cf6528c-kube-api-access-rcrps\") pod \"kube-storage-version-migrator-operator-b67b599dd-d6wcq\" (UID: \"f6eebdfd-4211-4035-bd7e-3a689cf6528c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.743825 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.744293 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.244276677 +0000 UTC m=+159.213719549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.755756 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2tvd\" (UniqueName: \"kubernetes.io/projected/665e1554-20bb-4238-8e92-b7eb966fddc7-kube-api-access-n2tvd\") pod \"machine-config-server-4jc2v\" (UID: \"665e1554-20bb-4238-8e92-b7eb966fddc7\") " pod="openshift-machine-config-operator/machine-config-server-4jc2v" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.768658 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjh84\" (UniqueName: \"kubernetes.io/projected/b89be286-e9b7-43b2-97d1-222740bca95a-kube-api-access-fjh84\") pod \"package-server-manager-789f6589d5-ncb2p\" (UID: \"b89be286-e9b7-43b2-97d1-222740bca95a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.790656 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ddmr\" (UniqueName: \"kubernetes.io/projected/21b137f9-b50f-4437-a188-c7303af83cb6-kube-api-access-5ddmr\") pod \"multus-admission-controller-857f4d67dd-9r99z\" (UID: \"21b137f9-b50f-4437-a188-c7303af83cb6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.802006 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4jc2v" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.809407 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29c6z\" (UniqueName: \"kubernetes.io/projected/b014bcbd-189a-4310-9105-00e0fd0f624b-kube-api-access-29c6z\") pod \"etcd-operator-b45778765-hdqc9\" (UID: \"b014bcbd-189a-4310-9105-00e0fd0f624b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.832224 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4jns\" (UniqueName: \"kubernetes.io/projected/e1ccc88e-b013-4c52-92b1-6e6462492c3c-kube-api-access-q4jns\") pod \"marketplace-operator-79b997595-m4vv7\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.845461 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.845919 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-02 18:23:22.345901193 +0000 UTC m=+159.315344065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.858991 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m7r2\" (UniqueName: \"kubernetes.io/projected/559b884d-1de9-433d-96a6-fc1f8b3622d4-kube-api-access-4m7r2\") pod \"service-ca-operator-777779d784-4ncqx\" (UID: \"559b884d-1de9-433d-96a6-fc1f8b3622d4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.865339 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.875995 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krmn2\" (UniqueName: \"kubernetes.io/projected/e291aef6-bbde-41a5-9981-96b992547e03-kube-api-access-krmn2\") pod \"collect-profiles-29323815-xbthn\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.881214 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.894306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869lt\" (UniqueName: \"kubernetes.io/projected/042f796e-c81a-4fd3-898c-ca596ed62bd5-kube-api-access-869lt\") pod \"csi-hostpathplugin-tpvsc\" (UID: \"042f796e-c81a-4fd3-898c-ca596ed62bd5\") " pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.907659 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7q5cd"] Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.909566 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv6jc\" (UniqueName: \"kubernetes.io/projected/0f4ffc00-40b8-4dd1-9ebc-775e5cce2490-kube-api-access-nv6jc\") pod \"migrator-59844c95c7-cx5vm\" (UID: \"0f4ffc00-40b8-4dd1-9ebc-775e5cce2490\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.927414 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.927441 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.929784 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw"] Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.931306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzmcf\" (UniqueName: \"kubernetes.io/projected/3629d037-0605-455a-8846-b96b543f8ee6-kube-api-access-jzmcf\") pod \"packageserver-d55dfcdfc-7z6z2\" (UID: \"3629d037-0605-455a-8846-b96b543f8ee6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.946482 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.946650 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.446622981 +0000 UTC m=+159.416065853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.946747 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:21 crc kubenswrapper[4832]: E1002 18:23:21.947386 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.447366204 +0000 UTC m=+159.416809096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.951003 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1fc3207-2d0d-48f2-aa03-7136bcd5823f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-smk65\" (UID: \"e1fc3207-2d0d-48f2-aa03-7136bcd5823f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.967407 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lsbz\" (UniqueName: \"kubernetes.io/projected/5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce-kube-api-access-5lsbz\") pod \"control-plane-machine-set-operator-78cbb6b69f-nqc2q\" (UID: \"5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.973793 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.982086 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" Oct 02 18:23:21 crc kubenswrapper[4832]: I1002 18:23:21.992951 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fjcjg"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.013632 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78sr\" (UniqueName: \"kubernetes.io/projected/5845d078-31dd-48b9-a1f8-b3cde570370c-kube-api-access-g78sr\") pod \"catalog-operator-68c6474976-v6kwp\" (UID: \"5845d078-31dd-48b9-a1f8-b3cde570370c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.016590 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkq2d\" (UniqueName: \"kubernetes.io/projected/79047375-b11f-4aa6-ae05-1bf9981b7da7-kube-api-access-nkq2d\") pod \"olm-operator-6b444d44fb-tfxbb\" (UID: \"79047375-b11f-4aa6-ae05-1bf9981b7da7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" Oct 02 18:23:22 crc kubenswrapper[4832]: W1002 18:23:22.017233 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod934c65a0_d9a8_484b_828e_b5b5db8b9575.slice/crio-440d29d1263b775ff8cf92c8e6e46e304d13c55f16622ced4c7a05b5f543423e WatchSource:0}: Error finding container 440d29d1263b775ff8cf92c8e6e46e304d13c55f16622ced4c7a05b5f543423e: Status 404 returned error can't find the container with id 440d29d1263b775ff8cf92c8e6e46e304d13c55f16622ced4c7a05b5f543423e Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.020733 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.024731 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.034623 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.035114 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.040612 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.041068 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.048871 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.049191 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.549176487 +0000 UTC m=+159.518619359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.055388 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.058752 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-ljdjq"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.061997 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldk7b\" (UniqueName: \"kubernetes.io/projected/563b1a80-432b-4eb1-b3d5-cf2843736168-kube-api-access-ldk7b\") pod \"dns-operator-744455d44c-tshzp\" (UID: \"563b1a80-432b-4eb1-b3d5-cf2843736168\") " pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.070622 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.071663 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plw7h\" (UniqueName: \"kubernetes.io/projected/03b5a76f-8ef9-4b85-841f-2c4a3011d71b-kube-api-access-plw7h\") pod \"machine-config-operator-74547568cd-glkgm\" (UID: \"03b5a76f-8ef9-4b85-841f-2c4a3011d71b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.094030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l68cx\" (UniqueName: \"kubernetes.io/projected/6a126c26-a7cd-48cf-8998-2f63af48e305-kube-api-access-l68cx\") pod \"ingress-canary-8xkdn\" (UID: \"6a126c26-a7cd-48cf-8998-2f63af48e305\") " pod="openshift-ingress-canary/ingress-canary-8xkdn" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.097827 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.118438 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7q5cd" event={"ID":"934c65a0-d9a8-484b-828e-b5b5db8b9575","Type":"ContainerStarted","Data":"440d29d1263b775ff8cf92c8e6e46e304d13c55f16622ced4c7a05b5f543423e"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.121408 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cca10a2e-3045-4696-9f52-263ff39d8101-bound-sa-token\") pod \"ingress-operator-5b745b69d9-62fbw\" (UID: \"cca10a2e-3045-4696-9f52-263ff39d8101\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.123112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dk8h\" (UniqueName: \"kubernetes.io/projected/ee435d24-9de6-4a34-80e7-044ae5bc1bef-kube-api-access-6dk8h\") pod \"router-default-5444994796-k5cpd\" (UID: \"ee435d24-9de6-4a34-80e7-044ae5bc1bef\") " pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.127479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" event={"ID":"37a9ae43-eef2-461a-a58d-c32b18ae74bc","Type":"ContainerStarted","Data":"4700345f93a5f7854e4131ebc6beb2fd4443e66e9cba843253c82254e3c60c40"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.130737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" event={"ID":"320193d7-edcc-4e8e-8e95-8da631ea5a64","Type":"ContainerStarted","Data":"0b4b2bb9eb681d58a6060bcebdec804dcfacd796c0eca25d8ebe5ead7a454ebb"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.133232 4832 generic.go:334] "Generic (PLEG): container finished" podID="6d7bebee-b537-4cf4-b00e-1051dac6aed6" containerID="0410d255005e96e77ad78e0ca5849e9da8e228b957d1403e1b2b4bb83dbdc21e" exitCode=0 Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.133296 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" event={"ID":"6d7bebee-b537-4cf4-b00e-1051dac6aed6","Type":"ContainerDied","Data":"0410d255005e96e77ad78e0ca5849e9da8e228b957d1403e1b2b4bb83dbdc21e"} Oct 02 
18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.133316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" event={"ID":"6d7bebee-b537-4cf4-b00e-1051dac6aed6","Type":"ContainerStarted","Data":"48106a7c7c23bf5f13ce6994771794bf3fe7d8c5bba78449c4db4f9b8fe151d5"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.137402 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" event={"ID":"245c924a-8033-464a-be07-6e7ebbb7d814","Type":"ContainerStarted","Data":"bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.137472 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" event={"ID":"245c924a-8033-464a-be07-6e7ebbb7d814","Type":"ContainerStarted","Data":"a54b626116619bd052d43850f12a2ac4d4c1cac2ae862f82009dc4a269613ff1"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.137981 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.138725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" event={"ID":"ae2a13a2-cd3b-40ea-bb53-edbd0449781b","Type":"ContainerStarted","Data":"4ee5afb0a35ec235720eee65e0bee05b6db9a82a73306f4b24815fabbee29e9a"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.138759 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" event={"ID":"ae2a13a2-cd3b-40ea-bb53-edbd0449781b","Type":"ContainerStarted","Data":"a15dc742abeab08a1d251a443ec08df94746bec4804ab481c863ac7b2b55171a"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.141164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" event={"ID":"34d934c3-20c9-4091-844a-e4db7482d8e0","Type":"ContainerStarted","Data":"1690cfdad24938530162d7d9f1b229d2382609cf117aa60a3fa6c8de151110ed"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.141200 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" event={"ID":"34d934c3-20c9-4091-844a-e4db7482d8e0","Type":"ContainerStarted","Data":"9632e62653e434e90ef049915c8f6dd2338cc64f52068cacb8a165e1cf1f56bf"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.141210 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" event={"ID":"34d934c3-20c9-4091-844a-e4db7482d8e0","Type":"ContainerStarted","Data":"269bb83017b6eade331bbee1cae80230dee177d82055184befa03af54f731264"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.147130 4832 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xgwwf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.147204 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" 
podUID="245c924a-8033-464a-be07-6e7ebbb7d814" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.151155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.151768 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.651745012 +0000 UTC m=+159.621188084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.151948 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.156413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nk8bt" event={"ID":"c501c3e2-851d-452a-9fd1-0cdb21ac15e6","Type":"ContainerStarted","Data":"27e76b61afa05399ca438260fb02a9cc7b3cf2f1e7e544e527a765b91a0ba817"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.156468 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nk8bt" event={"ID":"c501c3e2-851d-452a-9fd1-0cdb21ac15e6","Type":"ContainerStarted","Data":"7f3ef2b054800f1c0a6e2aea56dcb035f1d435cd9a0092e418ef2082e62e1d78"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.158021 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nk8bt" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.158782 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-nk8bt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.158860 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nk8bt" podUID="c501c3e2-851d-452a-9fd1-0cdb21ac15e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.161299 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" event={"ID":"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d","Type":"ContainerStarted","Data":"51fbfb770a346b772452fc2a91f19282d5465895c2c5b3352005e726d776e386"} Oct 02 
18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.165423 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4jc2v" event={"ID":"665e1554-20bb-4238-8e92-b7eb966fddc7","Type":"ContainerStarted","Data":"78b2439c60207684f37d16a5f66dc8d470a70b890d8860dd12b27cf113232d53"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.176530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fjcjg" event={"ID":"7860295f-4280-4d00-acc0-119ded425125","Type":"ContainerStarted","Data":"6ed54507bdbe9b12776d07798e8bc8e78e2d3f0643718bd052d55c63ebd79377"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.186601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" event={"ID":"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1","Type":"ContainerStarted","Data":"a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.186615 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.186664 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" event={"ID":"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1","Type":"ContainerStarted","Data":"7bd19dd68c94df6919c2991690a782a3a4702695b813ebe0cba9804db29e33a3"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.187016 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.190779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7ffgv" event={"ID":"b562a645-10d9-44f7-a4fe-d3bf63ac9185","Type":"ContainerStarted","Data":"f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.190818 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7ffgv" event={"ID":"b562a645-10d9-44f7-a4fe-d3bf63ac9185","Type":"ContainerStarted","Data":"b22d68221f6c7cc1c64e68081cb575dc4c520c4d72d7c3b56546a1520e083c5c"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.193751 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" event={"ID":"a7c249ac-abfd-42b2-b391-5018d1695100","Type":"ContainerStarted","Data":"2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.193813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" event={"ID":"a7c249ac-abfd-42b2-b391-5018d1695100","Type":"ContainerStarted","Data":"177f6bccad979fc0aa345a344515ec17c1c7d343da0e8ac5a7e1f8338b9e1483"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.194743 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.198327 4832 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fgcrc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" 
start-of-body= Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.198379 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" podUID="f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.5:6443/healthz\": dial tcp 10.217.0.5:6443: connect: connection refused" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.198491 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" event={"ID":"07068ae6-441b-4211-bd1f-e219157b4bb2","Type":"ContainerStarted","Data":"6470c86daf3aa188150cfeb69f12209f69ccf87725957e85c57aeb0f936bc3e8"} Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.198810 4832 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-w89r8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.198869 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" podUID="a7c249ac-abfd-42b2-b391-5018d1695100" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.203467 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.209973 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.259963 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.260226 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.260547 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.760530622 +0000 UTC m=+159.729973494 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.291579 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.322461 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.353975 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-jth5v"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.362514 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.365425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.367416 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.867402712 +0000 UTC m=+159.836845584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.376356 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8xkdn" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.389950 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.467162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.468638 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:22.968621456 +0000 UTC m=+159.938064328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.474066 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" podStartSLOduration=129.474052856 podStartE2EDuration="2m9.474052856s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:22.473181399 +0000 UTC m=+159.442624261" watchObservedRunningTime="2025-10-02 18:23:22.474052856 +0000 UTC m=+159.443495728" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.521624 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.564189 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.572723 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.573297 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.073246367 +0000 UTC m=+160.042689239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.579787 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.608972 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq"] Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.673696 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.673931 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.173900803 +0000 UTC m=+160.143343675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.673978 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-nk8bt" podStartSLOduration=129.673959884 podStartE2EDuration="2m9.673959884s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:22.672694165 +0000 UTC m=+159.642137047" watchObservedRunningTime="2025-10-02 18:23:22.673959884 +0000 UTC m=+159.643402756" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.674438 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.676707 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.176692259 +0000 UTC m=+160.146135121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.760656 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" podStartSLOduration=129.760631013 podStartE2EDuration="2m9.760631013s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:22.750227888 +0000 UTC m=+159.719670760" watchObservedRunningTime="2025-10-02 18:23:22.760631013 +0000 UTC m=+159.730073885" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.777149 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.777591 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.277556142 +0000 UTC m=+160.246999014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.877081 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-49qxn" podStartSLOduration=129.877062273 podStartE2EDuration="2m9.877062273s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:22.876210225 +0000 UTC m=+159.845653097" watchObservedRunningTime="2025-10-02 18:23:22.877062273 +0000 UTC m=+159.846505145" Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.878250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.878549 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.378538608 +0000 UTC m=+160.347981480 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.979657 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.979852 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.479823184 +0000 UTC m=+160.449266056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: E1002 18:23:22.980426 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.480417783 +0000 UTC m=+160.449860655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:22 crc kubenswrapper[4832]: I1002 18:23:22.985577 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.008158 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.017420 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9r99z"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.046621 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hdqc9"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.086431 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.088039 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.588000836 +0000 UTC m=+160.557443708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.101161 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.105324 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.120888 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"] Oct 02 18:23:23 crc kubenswrapper[4832]: W1002 18:23:23.151617 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21b137f9_b50f_4437_a188_c7303af83cb6.slice/crio-8986ba25db3cfe25ec1798cfd148dcb2270e25d187ffd67cb676e7277052a46e WatchSource:0}: Error finding container 8986ba25db3cfe25ec1798cfd148dcb2270e25d187ffd67cb676e7277052a46e: Status 404 returned error can't find the container with id 8986ba25db3cfe25ec1798cfd148dcb2270e25d187ffd67cb676e7277052a46e Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.161486 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.182677 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m4vv7"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.188987 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.189372 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.689361343 +0000 UTC m=+160.658804215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: W1002 18:23:23.208833 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559b884d_1de9_433d_96a6_fc1f8b3622d4.slice/crio-9a7d62a3a7800811d9a26e2f597fec2d6e1fde1606d4db05be4f4b7f222307d1 WatchSource:0}: Error finding container 9a7d62a3a7800811d9a26e2f597fec2d6e1fde1606d4db05be4f4b7f222307d1: Status 404 returned error can't find the container with id 9a7d62a3a7800811d9a26e2f597fec2d6e1fde1606d4db05be4f4b7f222307d1 Oct 02 18:23:23 crc kubenswrapper[4832]: W1002 18:23:23.223319 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79047375_b11f_4aa6_ae05_1bf9981b7da7.slice/crio-41544576d3b922e9552172ac7d04ecf0eb0c07f2e2d27134a7479ebb57afd8a6 WatchSource:0}: Error finding container 41544576d3b922e9552172ac7d04ecf0eb0c07f2e2d27134a7479ebb57afd8a6: Status 404 returned error can't find the container with id 41544576d3b922e9552172ac7d04ecf0eb0c07f2e2d27134a7479ebb57afd8a6 Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.247256 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.247463 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" event={"ID":"b89be286-e9b7-43b2-97d1-222740bca95a","Type":"ContainerStarted","Data":"1aceb5b862ebb03b3f4b0f759bb8341c9eb56d8e4af8779932f754054d6be907"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.247542 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.250499 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" event={"ID":"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2","Type":"ContainerStarted","Data":"15889ba3b150a257497b7ae5bda20de33ded863be6feafd3bc68b0bb93e5c955"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.255454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" event={"ID":"b014bcbd-189a-4310-9105-00e0fd0f624b","Type":"ContainerStarted","Data":"d786972cffd38261c0d3dd1aa1dd1c18499ca56fd1d9329c75669b7e201722d2"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.296196 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.296715 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.796698788 +0000 UTC m=+160.766141660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.304293 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fjcjg" event={"ID":"7860295f-4280-4d00-acc0-119ded425125","Type":"ContainerStarted","Data":"62a96b2400cb59353a71cd7dc7dc099d941225e1cdfcc1798a955b1061226541"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.334094 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" event={"ID":"f6eebdfd-4211-4035-bd7e-3a689cf6528c","Type":"ContainerStarted","Data":"f6aae4b2f4a0893af647bc947d24b7cf56363df84bf3a7ab3976256ead9740a7"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.343000 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7ffgv" podStartSLOduration=130.342980845 podStartE2EDuration="2m10.342980845s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:23.320918576 +0000 UTC m=+160.290361448" watchObservedRunningTime="2025-10-02 18:23:23.342980845 +0000 UTC m=+160.312423717" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.358572 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7q5cd" event={"ID":"934c65a0-d9a8-484b-828e-b5b5db8b9575","Type":"ContainerStarted","Data":"13c3b27714cdd1707447573426d2ad3f7fd48ff725c81cdc61df11a85dc069af"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.359322 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.362948 4832 patch_prober.go:28] interesting pod/console-operator-58897d9998-7q5cd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.362992 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7q5cd" podUID="934c65a0-d9a8-484b-828e-b5b5db8b9575" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.371796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" 
event={"ID":"93a9647e-3a6c-463d-826d-48254cc4ea1f","Type":"ContainerStarted","Data":"3a196d7b3c4103d64466fc091d0f29bfe7f97c1e52e216c66b3d72ea8c03e808"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.393072 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" event={"ID":"320193d7-edcc-4e8e-8e95-8da631ea5a64","Type":"ContainerStarted","Data":"3f5f363956bedd1b1f10ad38acb14304f5a37392d23a225dc39dc8bc67f0746a"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.397971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.398309 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:23.898293264 +0000 UTC m=+160.867736136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.399325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" event={"ID":"21b137f9-b50f-4437-a188-c7303af83cb6","Type":"ContainerStarted","Data":"8986ba25db3cfe25ec1798cfd148dcb2270e25d187ffd67cb676e7277052a46e"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.402531 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" event={"ID":"cb287078-753a-4a35-b491-25ccc9c614a3","Type":"ContainerStarted","Data":"4508d6a6453054ceb43a004e539b276179bc194e9fb4f417f063c5566b62d1a8"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.426606 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8xkdn"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.429123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" event={"ID":"e8b95c95-e0ac-476b-9a52-1e22bb23a540","Type":"ContainerStarted","Data":"f58897e632b362c3b27b48eebc2aaa50c3a0b8532bead51c543fb776418a514c"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.431521 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tpvsc"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.444081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm" event={"ID":"0f4ffc00-40b8-4dd1-9ebc-775e5cce2490","Type":"ContainerStarted","Data":"e2fa71822db1a7f70804b693bdc3bf752ae930150dabbf48673c64157a1ea32d"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.447156 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" event={"ID":"d71e7774-4bb0-42af-bba6-7473a9500d1f","Type":"ContainerStarted","Data":"c498e1f8c853f09b40e39f8c62e1874de818933b3b6a29e7093e6049a03fdfcc"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.447206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" event={"ID":"d71e7774-4bb0-42af-bba6-7473a9500d1f","Type":"ContainerStarted","Data":"57ed2b56cbd00db2a05586bc64bb50ee8450971f44422b9237359b846cdaf8b0"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.451941 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" event={"ID":"6d7bebee-b537-4cf4-b00e-1051dac6aed6","Type":"ContainerStarted","Data":"2458f47f0491e94a5d185d1f4da83528f08afe52c2e88a3a5abaee3dfb61a955"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.462085 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" event={"ID":"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d","Type":"ContainerStarted","Data":"13400d2ccb3ed24e77eaf09e9f3600cbf6696c2089b91ebf74e8ac9643133c78"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.480212 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.488634 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" event={"ID":"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4","Type":"ContainerStarted","Data":"990c1d3a1ed6ddfed41c298fefc82b066a1dbb9d3212023972b7d19492e1429a"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.499754 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tshzp"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.502192 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm"] Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.502737 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.503254 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.003238673 +0000 UTC m=+160.972681545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.503527 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.506172 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.006162755 +0000 UTC m=+160.975605627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.507496 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4jc2v" event={"ID":"665e1554-20bb-4238-8e92-b7eb966fddc7","Type":"ContainerStarted","Data":"0de3c8e3afdf8d47b0267d47a6139d4772230e6e903e2822eb7069bbcd6c9576"} Oct 02 18:23:23 crc kubenswrapper[4832]: W1002 18:23:23.511715 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a126c26_a7cd_48cf_8998_2f63af48e305.slice/crio-711168486fdf78e500c5d8d60978bf1db703ec9d0b7686d807cfcae3a0331ad0 WatchSource:0}: Error finding container 711168486fdf78e500c5d8d60978bf1db703ec9d0b7686d807cfcae3a0331ad0: Status 404 returned error can't find the container with id 711168486fdf78e500c5d8d60978bf1db703ec9d0b7686d807cfcae3a0331ad0 Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.515205 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k5cpd" event={"ID":"ee435d24-9de6-4a34-80e7-044ae5bc1bef","Type":"ContainerStarted","Data":"e5a16a5831b7a81c1da37bb0b870b29fc4f58680538fe1de26a780c0b56c5497"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.527164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" event={"ID":"4a81f0de-a11b-4652-a0fd-87468ce2e04d","Type":"ContainerStarted","Data":"447e9dd08a7199ade33aa0a502ba67eea111f8dd3792cad695bf3c58ac75a506"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.532813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" 
event={"ID":"37a9ae43-eef2-461a-a58d-c32b18ae74bc","Type":"ContainerStarted","Data":"aec187d0aa3cf4aa05d35e1d64afe96235c8dacf26120cd9f74e6993e5daaa4a"} Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.533530 4832 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-w89r8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.533573 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" podUID="a7c249ac-abfd-42b2-b391-5018d1695100" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.535199 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-nk8bt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.535239 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nk8bt" podUID="c501c3e2-851d-452a-9fd1-0cdb21ac15e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Oct 02 18:23:23 crc kubenswrapper[4832]: W1002 18:23:23.559792 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod563b1a80_432b_4eb1_b3d5_cf2843736168.slice/crio-9f493cc5d3d7c5dbb3a5cfc4ba047d50783131e38ab5d41e8816ef7a552256f0 WatchSource:0}: Error finding container 9f493cc5d3d7c5dbb3a5cfc4ba047d50783131e38ab5d41e8816ef7a552256f0: Status 404 returned error can't find the container with id 9f493cc5d3d7c5dbb3a5cfc4ba047d50783131e38ab5d41e8816ef7a552256f0 Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.569550 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.608677 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.609032 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.10901324 +0000 UTC m=+161.078456112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.609352 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.611406 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw"] Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.612295 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.112284082 +0000 UTC m=+161.081726954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.673791 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-8fkn8" podStartSLOduration=130.673768234 podStartE2EDuration="2m10.673768234s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:23.672015459 +0000 UTC m=+160.641458321" watchObservedRunningTime="2025-10-02 18:23:23.673768234 +0000 UTC m=+160.643211106" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.674037 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" podStartSLOduration=130.674031832 podStartE2EDuration="2m10.674031832s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:23.630666767 +0000 UTC m=+160.600109639" watchObservedRunningTime="2025-10-02 18:23:23.674031832 +0000 UTC m=+160.643474704" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.710641 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.712077 4832 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.212051761 +0000 UTC m=+161.181494713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.812404 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.812721 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.312710067 +0000 UTC m=+161.282152939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.859967 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.906997 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4jc2v" podStartSLOduration=5.906980083 podStartE2EDuration="5.906980083s" podCreationTimestamp="2025-10-02 18:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:23.90624351 +0000 UTC m=+160.875686382" watchObservedRunningTime="2025-10-02 18:23:23.906980083 +0000 UTC m=+160.876422955" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.907609 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" podStartSLOduration=130.907605133 podStartE2EDuration="2m10.907605133s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:23.876435549 +0000 UTC m=+160.845878421" watchObservedRunningTime="2025-10-02 18:23:23.907605133 +0000 UTC m=+160.877048005" Oct 02 18:23:23 crc kubenswrapper[4832]: I1002 18:23:23.912985 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:23 crc kubenswrapper[4832]: E1002 18:23:23.913406 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.413390413 +0000 UTC m=+161.382833285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.016797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.017082 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.517071554 +0000 UTC m=+161.486514426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.030765 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4c986" podStartSLOduration=131.030747691 podStartE2EDuration="2m11.030747691s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.026140397 +0000 UTC m=+160.995583269" watchObservedRunningTime="2025-10-02 18:23:24.030747691 +0000 UTC m=+161.000190563" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.069457 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7q5cd" podStartSLOduration=131.069438901 podStartE2EDuration="2m11.069438901s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.069005067 +0000 UTC m=+161.038447939" watchObservedRunningTime="2025-10-02 18:23:24.069438901 +0000 UTC m=+161.038881773" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.113540 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-86vg4" podStartSLOduration=131.113524969 podStartE2EDuration="2m11.113524969s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.112059093 +0000 UTC m=+161.081501965" watchObservedRunningTime="2025-10-02 18:23:24.113524969 +0000 UTC m=+161.082967841" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.117412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.117789 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.617774811 +0000 UTC m=+161.587217683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.156898 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" podStartSLOduration=131.156876953 podStartE2EDuration="2m11.156876953s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.152811256 +0000 UTC m=+161.122254128" watchObservedRunningTime="2025-10-02 18:23:24.156876953 +0000 UTC m=+161.126319825" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.205503 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.209004 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.209062 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.216982 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-w5ngm" podStartSLOduration=131.216966342 podStartE2EDuration="2m11.216966342s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.215584009 +0000 UTC m=+161.185026881" watchObservedRunningTime="2025-10-02 18:23:24.216966342 +0000 UTC m=+161.186409214" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.218832 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.219292 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.719277974 +0000 UTC m=+161.688720846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.303706 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" podStartSLOduration=131.303690802 podStartE2EDuration="2m11.303690802s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.277203475 +0000 UTC m=+161.246646347" watchObservedRunningTime="2025-10-02 18:23:24.303690802 +0000 UTC m=+161.273133674" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.328739 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.328870 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.828838498 +0000 UTC m=+161.798281380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.328957 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.329330 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.829318654 +0000 UTC m=+161.798761526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.345364 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-k5cpd" podStartSLOduration=131.345344715 podStartE2EDuration="2m11.345344715s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.304714635 +0000 UTC m=+161.274157507" watchObservedRunningTime="2025-10-02 18:23:24.345344715 +0000 UTC m=+161.314787587" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.429700 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.429915 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:24.929899937 +0000 UTC m=+161.899342809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.530934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.531549 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.031535254 +0000 UTC m=+162.000978126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.553753 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" event={"ID":"5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce","Type":"ContainerStarted","Data":"2f734524cf4b37023d231c11f4bf922bd484ce14b6be6ec133c0b35a37e16316"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.577395 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" event={"ID":"cca10a2e-3045-4696-9f52-263ff39d8101","Type":"ContainerStarted","Data":"9dbdb6671491b21c6d81485192bd23fa1776e71a529b0a398690073ba4c54910"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.582996 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" event={"ID":"3629d037-0605-455a-8846-b96b543f8ee6","Type":"ContainerStarted","Data":"e26826153febea2c00b84cbe903fe0e8acd11d82fd35ad8e151c023b1a666f96"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.583040 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" event={"ID":"3629d037-0605-455a-8846-b96b543f8ee6","Type":"ContainerStarted","Data":"02802bb5056816251c476c69c2e46233a43bafec8222992eaf05ea32a04bf3a1"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.583841 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.591479 4832 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7z6z2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.591520 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" podUID="3629d037-0605-455a-8846-b96b543f8ee6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.601953 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" event={"ID":"e1ccc88e-b013-4c52-92b1-6e6462492c3c","Type":"ContainerStarted","Data":"0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.602009 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" event={"ID":"e1ccc88e-b013-4c52-92b1-6e6462492c3c","Type":"ContainerStarted","Data":"3058a08db5f63e1af7a80ae3ee9a111416244f1160797f5eb1181f182e568cce"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.603123 4832 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.608446 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m4vv7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.608492 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" podUID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.610789 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" podStartSLOduration=131.61077008 podStartE2EDuration="2m11.61077008s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.609186221 +0000 UTC m=+161.578629093" watchObservedRunningTime="2025-10-02 18:23:24.61077008 +0000 UTC m=+161.580212952" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.613738 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" event={"ID":"563b1a80-432b-4eb1-b3d5-cf2843736168","Type":"ContainerStarted","Data":"9f493cc5d3d7c5dbb3a5cfc4ba047d50783131e38ab5d41e8816ef7a552256f0"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.617779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" event={"ID":"03b5a76f-8ef9-4b85-841f-2c4a3011d71b","Type":"ContainerStarted","Data":"fa8ac8989bdb7aa5d0748caae9563e095666d355bf193970c9458d234763f6a6"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.632285 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.633279 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.133242043 +0000 UTC m=+162.102684915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.645975 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" event={"ID":"b014bcbd-189a-4310-9105-00e0fd0f624b","Type":"ContainerStarted","Data":"1e637d180bf8d3674d1aa6d9dfc86d687656eb05db0a2fa3b621416366de0432"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.685609 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" podStartSLOduration=131.685588219 podStartE2EDuration="2m11.685588219s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.683321518 +0000 UTC m=+161.652764410" watchObservedRunningTime="2025-10-02 18:23:24.685588219 +0000 UTC m=+161.655031091" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.714631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" event={"ID":"79047375-b11f-4aa6-ae05-1bf9981b7da7","Type":"ContainerStarted","Data":"d0ea9ea6f755eea7695cae9e186bf83ee79babbea2462023ea26111d1f62b11d"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.714716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" event={"ID":"79047375-b11f-4aa6-ae05-1bf9981b7da7","Type":"ContainerStarted","Data":"41544576d3b922e9552172ac7d04ecf0eb0c07f2e2d27134a7479ebb57afd8a6"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.715095 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.733140 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k5cpd" event={"ID":"ee435d24-9de6-4a34-80e7-044ae5bc1bef","Type":"ContainerStarted","Data":"c22e18657abbd179f9f2d17539f2a3a25112112ef3fdd840c09bd48576f4eaf2"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.767338 4832 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tfxbb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.767390 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" podUID="79047375-b11f-4aa6-ae05-1bf9981b7da7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.768822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.772008 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.27199197 +0000 UTC m=+162.241434842 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.785857 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hdqc9" podStartSLOduration=131.785842313 podStartE2EDuration="2m11.785842313s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.732725382 +0000 UTC m=+161.702168254" watchObservedRunningTime="2025-10-02 18:23:24.785842313 +0000 UTC m=+161.755285185" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.787936 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" event={"ID":"e1fc3207-2d0d-48f2-aa03-7136bcd5823f","Type":"ContainerStarted","Data":"1ef2d067324a4ed7c795e1dee478ad29e413259dd1a2df67da73e4add6ae9666"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.798124 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8xkdn" event={"ID":"6a126c26-a7cd-48cf-8998-2f63af48e305","Type":"ContainerStarted","Data":"711168486fdf78e500c5d8d60978bf1db703ec9d0b7686d807cfcae3a0331ad0"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.836659 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" event={"ID":"f6eebdfd-4211-4035-bd7e-3a689cf6528c","Type":"ContainerStarted","Data":"8eabc9a97ffd06fa8f204998814350abbc3285cf26fb64494cc7f91a668dbd54"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.860726 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" event={"ID":"559b884d-1de9-433d-96a6-fc1f8b3622d4","Type":"ContainerStarted","Data":"388476c4af828c5f31cd52424715b03ce14a0d0f5307b4bda21f3bbbca22ba4e"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.860773 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" event={"ID":"559b884d-1de9-433d-96a6-fc1f8b3622d4","Type":"ContainerStarted","Data":"9a7d62a3a7800811d9a26e2f597fec2d6e1fde1606d4db05be4f4b7f222307d1"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.873521 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d6wcq" podStartSLOduration=131.873505483 podStartE2EDuration="2m11.873505483s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.872705717 +0000 UTC m=+161.842148589" watchObservedRunningTime="2025-10-02 18:23:24.873505483 +0000 UTC m=+161.842948355" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.875217 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" podStartSLOduration=131.875209606 podStartE2EDuration="2m11.875209606s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.790282392 +0000 UTC m=+161.759725264" watchObservedRunningTime="2025-10-02 18:23:24.875209606 +0000 UTC m=+161.844652478" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.886040 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" event={"ID":"0fc57bce-fad5-4ee2-9cf7-7a61853d6fa2","Type":"ContainerStarted","Data":"d122e2d6458eb246158d39ff1957561f5f2c9e6f1eb3fa78ba97782efcd16efe"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.887199 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.888482 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.38846464 +0000 UTC m=+162.357907512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.922467 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm" event={"ID":"0f4ffc00-40b8-4dd1-9ebc-775e5cce2490","Type":"ContainerStarted","Data":"6c907bb36836b4f8c7b910a380cb93f55e791e192f0bc18d9848e4f91b54988a"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.936784 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k6rmw" podStartSLOduration=131.93676774 podStartE2EDuration="2m11.93676774s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.936238053 +0000 UTC m=+161.905680925" watchObservedRunningTime="2025-10-02 18:23:24.93676774 +0000 UTC m=+161.906210612" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.938253 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4ncqx" podStartSLOduration=131.938246636 podStartE2EDuration="2m11.938246636s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.90510509 +0000 UTC m=+161.874547962" watchObservedRunningTime="2025-10-02 18:23:24.938246636 +0000 UTC m=+161.907689508" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.953162 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" event={"ID":"5845d078-31dd-48b9-a1f8-b3cde570370c","Type":"ContainerStarted","Data":"98a09060629126293b6cf6697c8fbfdf2a42bca3f3009ba8e3b7a4f0ccb3f58c"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.953209 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" event={"ID":"5845d078-31dd-48b9-a1f8-b3cde570370c","Type":"ContainerStarted","Data":"b5157954dd0124b06e5e2ed97cbe8a8d401ab43951e7f43a95153700ce82c021"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.954208 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.960457 4832 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-v6kwp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.960531 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" podUID="5845d078-31dd-48b9-a1f8-b3cde570370c" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.977069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" event={"ID":"21b137f9-b50f-4437-a188-c7303af83cb6","Type":"ContainerStarted","Data":"1b57bfe0a319573553eee21ebff5634fe62bad3244efa1c1e14c587ee9a3dc7c"} Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.979616 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm" podStartSLOduration=131.979502675 podStartE2EDuration="2m11.979502675s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:24.977134962 +0000 UTC m=+161.946577834" watchObservedRunningTime="2025-10-02 18:23:24.979502675 +0000 UTC m=+161.948945547" Oct 02 18:23:24 crc kubenswrapper[4832]: I1002 18:23:24.988500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:24 crc kubenswrapper[4832]: E1002 18:23:24.992485 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.492470261 +0000 UTC m=+162.461913133 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.007530 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-jth5v" event={"ID":"e8b95c95-e0ac-476b-9a52-1e22bb23a540","Type":"ContainerStarted","Data":"a759e47fd5b4015d3a58dff439b70e10129e4dac748d32f5184ad9a9c7b3c749"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.015460 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" podStartSLOduration=132.015437839 podStartE2EDuration="2m12.015437839s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:25.013736866 +0000 UTC m=+161.983179738" watchObservedRunningTime="2025-10-02 18:23:25.015437839 +0000 UTC m=+161.984880711" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.042037 4832 generic.go:334] "Generic (PLEG): container finished" podID="07068ae6-441b-4211-bd1f-e219157b4bb2" containerID="b2138864cc81e5cb4c3d22d52afa8bf46010c09fc576af56a80cf71cb14c1bd3" exitCode=0 Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.042117 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" event={"ID":"07068ae6-441b-4211-bd1f-e219157b4bb2","Type":"ContainerDied","Data":"b2138864cc81e5cb4c3d22d52afa8bf46010c09fc576af56a80cf71cb14c1bd3"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.047458 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-prv7d" event={"ID":"cb287078-753a-4a35-b491-25ccc9c614a3","Type":"ContainerStarted","Data":"8c745a9b226bb921021e844079c5b3fccf6f0a8e90f2b45b258a1fa34083042e"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.080441 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" event={"ID":"4a81f0de-a11b-4652-a0fd-87468ce2e04d","Type":"ContainerStarted","Data":"ffc529fc2f5cd976d69e3f6be6a28890dfe51f523c8e5f27a2149f64ecb464d1"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.080489 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" event={"ID":"4a81f0de-a11b-4652-a0fd-87468ce2e04d","Type":"ContainerStarted","Data":"610be44cbfc6124bcd7b665c6dc1fcf93f04b12982c209693185950d2aec47b5"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.096999 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.097206 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.597162854 +0000 UTC m=+162.566605726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.097974 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.100059 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.600048083 +0000 UTC m=+162.569490955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.106371 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52bdc" podStartSLOduration=132.106354101 podStartE2EDuration="2m12.106354101s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:25.105618518 +0000 UTC m=+162.075061390" watchObservedRunningTime="2025-10-02 18:23:25.106354101 +0000 UTC m=+162.075796973" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.108731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" event={"ID":"042f796e-c81a-4fd3-898c-ca596ed62bd5","Type":"ContainerStarted","Data":"bfb9da0b71f0f13c886d2d6d90407c711bdf3fb9b45d456db2ffd3a6f97c73a7"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.144150 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" event={"ID":"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4","Type":"ContainerStarted","Data":"e7f716a4f0c63a5260bb6509c406b523fa6f89ec876ec14e215d9ea2aa441d83"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.144208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" 
event={"ID":"fc7dc5fa-f826-40b7-b05f-7b8ed10452d4","Type":"ContainerStarted","Data":"51a4601c67ba24a04c53eb2030a39b1fdda1901588e56417bdaef4899336c5d5"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.153474 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fjcjg" event={"ID":"7860295f-4280-4d00-acc0-119ded425125","Type":"ContainerStarted","Data":"e3edd98fedbf01ed926333cd97dbb9b691007800e9650840bab799bb45316fd3"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.154133 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.156881 4832 generic.go:334] "Generic (PLEG): container finished" podID="4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d" containerID="13400d2ccb3ed24e77eaf09e9f3600cbf6696c2089b91ebf74e8ac9643133c78" exitCode=0 Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.157229 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" event={"ID":"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d","Type":"ContainerDied","Data":"13400d2ccb3ed24e77eaf09e9f3600cbf6696c2089b91ebf74e8ac9643133c78"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.158491 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" event={"ID":"b89be286-e9b7-43b2-97d1-222740bca95a","Type":"ContainerStarted","Data":"9e6fbe2f975a23af6191c95b220090edfb6396ba504fb5eb856490e941b55ad8"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.176629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" event={"ID":"93a9647e-3a6c-463d-826d-48254cc4ea1f","Type":"ContainerStarted","Data":"b7165a7aa689d31bfefe85a9bd3dd2f8c49f1c7d1e36fbd4ce49f6e6a5e551e7"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.196380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" event={"ID":"e291aef6-bbde-41a5-9981-96b992547e03","Type":"ContainerStarted","Data":"64d284757ed2c12f01897739b3e186e8a14c4692a20fe726fd8d3c98fe203c77"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.196417 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" event={"ID":"e291aef6-bbde-41a5-9981-96b992547e03","Type":"ContainerStarted","Data":"8ee57580f6730db125a9e82bfee279a69946b9d74a7cb98a8e8795e3e369973c"} Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.201315 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.202873 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.702855957 +0000 UTC m=+162.672298829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.214011 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:25 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:25 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:25 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.214076 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.214781 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fjcjg" podStartSLOduration=7.214761899 podStartE2EDuration="7.214761899s" podCreationTimestamp="2025-10-02 18:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:25.213300273 +0000 UTC m=+162.182743155" watchObservedRunningTime="2025-10-02 18:23:25.214761899 +0000 UTC m=+162.184204771" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.216046 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xjgz2" podStartSLOduration=132.216037749 podStartE2EDuration="2m12.216037749s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:25.172685054 +0000 UTC m=+162.142127926" watchObservedRunningTime="2025-10-02 18:23:25.216037749 +0000 UTC m=+162.185480621" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.276902 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" podStartSLOduration=132.276882131 podStartE2EDuration="2m12.276882131s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:25.272947168 +0000 UTC m=+162.242390040" watchObservedRunningTime="2025-10-02 18:23:25.276882131 +0000 UTC m=+162.246325003" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.304663 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 
18:23:25.308737 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.808721246 +0000 UTC m=+162.778164118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.321627 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-qcz87" podStartSLOduration=132.321596688 podStartE2EDuration="2m12.321596688s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:25.320642978 +0000 UTC m=+162.290085850" watchObservedRunningTime="2025-10-02 18:23:25.321596688 +0000 UTC m=+162.291039560" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.409477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.409768 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:25.909754743 +0000 UTC m=+162.879197615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.514338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.514640 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.014629252 +0000 UTC m=+162.984072124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.620793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.621304 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.121289785 +0000 UTC m=+163.090732657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.722111 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.722723 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.222710915 +0000 UTC m=+163.192153787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.728725 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7q5cd" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.823012 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.823312 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.3232957 +0000 UTC m=+163.292738572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.847842 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.848530 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.882246 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:25 crc kubenswrapper[4832]: I1002 18:23:25.925898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:25 crc kubenswrapper[4832]: E1002 18:23:25.926229 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.426218946 +0000 UTC m=+163.395661818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.027730 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.028476 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.528461471 +0000 UTC m=+163.497904343 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.129652 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.129981 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.629965625 +0000 UTC m=+163.599408497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.204131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" event={"ID":"21b137f9-b50f-4437-a188-c7303af83cb6","Type":"ContainerStarted","Data":"920c55e1716206456004f0818f9b574796582b6b00eb98ace112dee9b9cb4f16"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.206161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8xkdn" event={"ID":"6a126c26-a7cd-48cf-8998-2f63af48e305","Type":"ContainerStarted","Data":"87d558eef27d0189e923bf5320790fb757269b5206912096f6197d325d378110"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.207899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" event={"ID":"03b5a76f-8ef9-4b85-841f-2c4a3011d71b","Type":"ContainerStarted","Data":"270f6381b303b0c98b59a98583a5c7e7b5d12155471f98c36a60a1d9f5043c53"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.207927 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" event={"ID":"03b5a76f-8ef9-4b85-841f-2c4a3011d71b","Type":"ContainerStarted","Data":"c771b895ccf3d3a384aa4aa57d18b7d94f916d3ae49c3d1a4419442ede8bb864"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.209571 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" event={"ID":"b89be286-e9b7-43b2-97d1-222740bca95a","Type":"ContainerStarted","Data":"0d4eddf923a3196782f178bf3b390b7a9fb3aeda328b85711fa053013fbea202"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.209711 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.211244 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" event={"ID":"042f796e-c81a-4fd3-898c-ca596ed62bd5","Type":"ContainerStarted","Data":"9a8b1976b014c7d4aee42a9d4aa75420880fa8d682e710d67a587f3624847e8e"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.212701 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" event={"ID":"563b1a80-432b-4eb1-b3d5-cf2843736168","Type":"ContainerStarted","Data":"b7878c539010fb329cea2cf5640d91d5529f8eab505a447bbb74d8265ec356e9"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.212727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" event={"ID":"563b1a80-432b-4eb1-b3d5-cf2843736168","Type":"ContainerStarted","Data":"e4979d869120d3a9300c07eb30733c7f684fc3b4410d8fd5a1636ca1ad0cb67e"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.214529 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" event={"ID":"e1fc3207-2d0d-48f2-aa03-7136bcd5823f","Type":"ContainerStarted","Data":"7c737113d957893c1c90f745623dd1b64546517d2ef12a4c1e27ad4284603c33"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.214913 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:26 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:26 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:26 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.214966 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.215919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" event={"ID":"5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce","Type":"ContainerStarted","Data":"55a407505ac5b7dec5bcaffc992ba1a73cbcbe5bbb64bb63af08df17f2d1f623"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.217301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" event={"ID":"cca10a2e-3045-4696-9f52-263ff39d8101","Type":"ContainerStarted","Data":"2e59c7a7056aa678d5113c1fde7e74293c9ebc4a89cb18822db1b8ae4a844bd3"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.217327 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" event={"ID":"cca10a2e-3045-4696-9f52-263ff39d8101","Type":"ContainerStarted","Data":"f0b414413208d07fadd3f3031f1152f24b583784c2153cbeddb751db044ae8db"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.218725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cx5vm" event={"ID":"0f4ffc00-40b8-4dd1-9ebc-775e5cce2490","Type":"ContainerStarted","Data":"9bd4e1bd05a36a1c2dad3e9c009039d716a48674ecb79f60595ddec3facf5483"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.220831 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" event={"ID":"07068ae6-441b-4211-bd1f-e219157b4bb2","Type":"ContainerStarted","Data":"50f722375b53b61514b20427589cb448f604d781513aabe693a7c85fc281432f"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.220866 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" event={"ID":"07068ae6-441b-4211-bd1f-e219157b4bb2","Type":"ContainerStarted","Data":"67ab277ff6fcba24fb3050174f8ce433bcf7800986fe58789a6a52b352a3833c"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.222804 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" event={"ID":"4ebab0a6-afb8-408d-aa2b-d341e8eb2f6d","Type":"ContainerStarted","Data":"9d9eb19ee009eae42164b88f6a2470af9af544b893551fec2c252337ea3afdf7"} Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.224429 4832 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-m4vv7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.224468 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" podUID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.230628 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.230971 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.730944581 +0000 UTC m=+163.700387463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.231062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.231503 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.731493048 +0000 UTC m=+163.700935920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.240463 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tfxbb" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.248348 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vw26f" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.285075 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v6kwp" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.319495 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" podStartSLOduration=133.319469798 podStartE2EDuration="2m13.319469798s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.318967711 +0000 UTC m=+163.288410583" watchObservedRunningTime="2025-10-02 18:23:26.319469798 +0000 UTC m=+163.288912670" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.320180 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9r99z" podStartSLOduration=133.320175499 podStartE2EDuration="2m13.320175499s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.262629071 +0000 UTC m=+163.232071943" watchObservedRunningTime="2025-10-02 18:23:26.320175499 +0000 UTC m=+163.289618371" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.335787 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.336190 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.83617099 +0000 UTC m=+163.805613892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.336928 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.352876 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.852855091 +0000 UTC m=+163.822297963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.442289 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.442564 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.942538464 +0000 UTC m=+163.911981336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.442904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.443326 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:26.943313839 +0000 UTC m=+163.912756711 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.485518 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nqc2q" podStartSLOduration=133.485501137 podStartE2EDuration="2m13.485501137s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.484683361 +0000 UTC m=+163.454126233" watchObservedRunningTime="2025-10-02 18:23:26.485501137 +0000 UTC m=+163.454944009" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.486137 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8xkdn" podStartSLOduration=8.486132817 podStartE2EDuration="8.486132817s" podCreationTimestamp="2025-10-02 18:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.441713258 +0000 UTC m=+163.411156130" watchObservedRunningTime="2025-10-02 18:23:26.486132817 +0000 UTC m=+163.455575689" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.507957 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.511440 4832 patch_prober.go:28] interesting pod/apiserver-76f77b778f-ljdjq container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.511516 4832 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" podUID="07068ae6-441b-4211-bd1f-e219157b4bb2" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.529386 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.533749 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" podStartSLOduration=133.533723114 podStartE2EDuration="2m13.533723114s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.532897688 +0000 UTC m=+163.502340560" watchObservedRunningTime="2025-10-02 18:23:26.533723114 +0000 UTC m=+163.503165986" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.548038 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.548563 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.048542697 +0000 UTC m=+164.017985569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.583961 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-tshzp" podStartSLOduration=133.583927163 podStartE2EDuration="2m13.583927163s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.578166243 +0000 UTC m=+163.547609115" watchObservedRunningTime="2025-10-02 18:23:26.583927163 +0000 UTC m=+163.553370035" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.642602 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-62fbw" podStartSLOduration=133.642576306 podStartE2EDuration="2m13.642576306s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.639675155 +0000 UTC m=+163.609118027" watchObservedRunningTime="2025-10-02 18:23:26.642576306 +0000 UTC m=+163.612019178" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.652664 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.653114 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.153096695 +0000 UTC m=+164.122539567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.691085 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" podStartSLOduration=133.691071543 podStartE2EDuration="2m13.691071543s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.689666868 +0000 UTC m=+163.659109750" watchObservedRunningTime="2025-10-02 18:23:26.691071543 +0000 UTC m=+163.660514415" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.755183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.755609 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.255591278 +0000 UTC m=+164.225034160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.782528 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-smk65" podStartSLOduration=133.78251112 podStartE2EDuration="2m13.78251112s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.722492725 +0000 UTC m=+163.691935597" watchObservedRunningTime="2025-10-02 18:23:26.78251112 +0000 UTC m=+163.751953992" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.822740 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-glkgm" podStartSLOduration=133.822718316 podStartE2EDuration="2m13.822718316s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:26.819584629 +0000 UTC m=+163.789027521" watchObservedRunningTime="2025-10-02 18:23:26.822718316 +0000 UTC m=+163.792161188" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.857009 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.857377 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.35736316 +0000 UTC m=+164.326806032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.877704 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.877772 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.958680 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.958781 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.458765099 +0000 UTC m=+164.428207971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:26 crc kubenswrapper[4832]: I1002 18:23:26.959140 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:26 crc kubenswrapper[4832]: E1002 18:23:26.959436 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.45942661 +0000 UTC m=+164.428869482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.020394 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7z6z2" Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.061101 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.061469 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.561450729 +0000 UTC m=+164.530893601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.163050 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.163489 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.663467237 +0000 UTC m=+164.632910109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.213846 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:27 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:27 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:27 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.213904 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.234898 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.235365 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m4vv7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.235401 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" podUID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.263703 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.264086 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.764066961 +0000 UTC m=+164.733509833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.365619 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.370183 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.870168248 +0000 UTC m=+164.839611120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.467310 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.467503 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.967470919 +0000 UTC m=+164.936913791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.468283 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.468698 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:27.968676427 +0000 UTC m=+164.938119299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.569683 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.569904 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.06987373 +0000 UTC m=+165.039316612 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.569992 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.570372 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.070363075 +0000 UTC m=+165.039805947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.670803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.670949 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.170923149 +0000 UTC m=+165.140366021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.671103 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.671443 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.171428584 +0000 UTC m=+165.140871456 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.772503 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.772699 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.272672438 +0000 UTC m=+165.242115310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.772969 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.773348 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.273336749 +0000 UTC m=+165.242779611 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.874183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.874381 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.374351956 +0000 UTC m=+165.343794828 (durationBeforeRetry 500ms). 
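Note: every failure in the cycle above has the same root cause. Before each MountDevice or TearDown call, the kubelet's volume manager looks the driver name up in its registry of CSI plugins that have completed node registration; kubevirt.io.hostpath-provisioner has not registered its socket yet, so both operations fail immediately and are re-queued with a flat 500ms backoff (the durationBeforeRetry in each record). A minimal Go sketch of that lookup-then-retry shape, with hypothetical names, not the kubelet's actual code:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // driverRegistry mimics the kubelet's in-memory map of CSI drivers
    // that have completed plugin registration over their registry socket.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> node plugin endpoint
    }

    func (r *driverRegistry) endpoint(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            // Same failure mode as the log: nothing can be mounted or
            // torn down until the driver has registered.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]string{}}

        // Registration normally happens asynchronously once the plugin's
        // registration socket appears; simulate it arriving late.
        go func() {
            time.Sleep(1200 * time.Millisecond)
            reg.mu.Lock()
            reg.drivers["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/csi-hostpath/csi.sock"
            reg.mu.Unlock()
        }()

        // Flat 500ms retry, like durationBeforeRetry in the records above.
        for {
            ep, err := reg.endpoint("kubevirt.io.hostpath-provisioner")
            if err != nil {
                fmt.Println("retrying in 500ms:", err)
                time.Sleep(500 * time.Millisecond)
                continue
            }
            fmt.Println("driver registered at", ep, "- proceeding with mount")
            return
        }
    }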
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.874676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.875014 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.375007027 +0000 UTC m=+165.344449899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.975566 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:27 crc kubenswrapper[4832]: E1002 18:23:27.975901 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.47588525 +0000 UTC m=+165.445328122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:27 crc kubenswrapper[4832]: I1002 18:23:27.983787 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-86r7x"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.004389 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.009469 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.021168 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86r7x"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.083562 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-utilities\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.083904 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brh4\" (UniqueName: \"kubernetes.io/projected/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-kube-api-access-4brh4\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.083948 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.083988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-catalog-content\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.084505 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.584491604 +0000 UTC m=+165.553934476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.179923 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jgq75"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.180818 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.185632 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.185800 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.68577108 +0000 UTC m=+165.655213952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.185983 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-catalog-content\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.186095 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-utilities\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.186149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brh4\" (UniqueName: \"kubernetes.io/projected/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-kube-api-access-4brh4\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.186198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.186663 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.686645578 +0000 UTC m=+165.656088450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.187183 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-catalog-content\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.187320 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-utilities\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.189751 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.193056 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgq75"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.215538 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:28 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:28 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:28 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.215607 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.232927 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brh4\" (UniqueName: \"kubernetes.io/projected/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-kube-api-access-4brh4\") pod \"community-operators-86r7x\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.270441 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" event={"ID":"042f796e-c81a-4fd3-898c-ca596ed62bd5","Type":"ContainerStarted","Data":"dbe63f61f3994417b6c951f491798d757627bae16f17e0466654bbf392b5a035"} Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.270489 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" event={"ID":"042f796e-c81a-4fd3-898c-ca596ed62bd5","Type":"ContainerStarted","Data":"bda26de5452fa645458fb9418ad0b4fc2f355c5b7f55af7672e7f59da1e6c919"} Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.283564 4832 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4k7vw" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.286771 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.287006 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbp67\" (UniqueName: \"kubernetes.io/projected/cd8c743f-e305-47fe-9858-0c0af2a86ea3-kube-api-access-gbp67\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.287098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-utilities\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.287122 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-catalog-content\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.287215 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.787199091 +0000 UTC m=+165.756641963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.321492 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.368662 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c88zl"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.369617 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.388508 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbp67\" (UniqueName: \"kubernetes.io/projected/cd8c743f-e305-47fe-9858-0c0af2a86ea3-kube-api-access-gbp67\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.388564 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.388740 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-utilities\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.388771 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-catalog-content\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.390434 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.890423807 +0000 UTC m=+165.859866679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.391435 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-utilities\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.391625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-catalog-content\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.394834 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c88zl"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.437097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbp67\" (UniqueName: \"kubernetes.io/projected/cd8c743f-e305-47fe-9858-0c0af2a86ea3-kube-api-access-gbp67\") pod \"certified-operators-jgq75\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.492742 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.493067 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-catalog-content\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.493107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-utilities\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.493160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh6c2\" (UniqueName: \"kubernetes.io/projected/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-kube-api-access-fh6c2\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.493373 4832 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:28.993354124 +0000 UTC m=+165.962796996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.494756 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.581537 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nkcs9"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.582626 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.598871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.598929 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-catalog-content\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.598953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-utilities\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.599446 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-catalog-content\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.599749 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-utilities\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.600731 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh6c2\" (UniqueName: 
\"kubernetes.io/projected/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-kube-api-access-fh6c2\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.607588 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.107561074 +0000 UTC m=+166.077003946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.608850 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkcs9"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.641234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh6c2\" (UniqueName: \"kubernetes.io/projected/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-kube-api-access-fh6c2\") pod \"community-operators-c88zl\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.702006 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.702385 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-catalog-content\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.702428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-utilities\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.702456 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fblvk\" (UniqueName: \"kubernetes.io/projected/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-kube-api-access-fblvk\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.702544 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-02 18:23:29.202529832 +0000 UTC m=+166.171972704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.709718 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.779924 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86r7x"] Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.809647 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.809701 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-catalog-content\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.809759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-utilities\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.809791 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fblvk\" (UniqueName: \"kubernetes.io/projected/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-kube-api-access-fblvk\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.809948 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.309918918 +0000 UTC m=+166.279361780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.810560 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-catalog-content\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.810596 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-utilities\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.859111 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fblvk\" (UniqueName: \"kubernetes.io/projected/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-kube-api-access-fblvk\") pod \"certified-operators-nkcs9\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.895331 4832 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.903130 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.911674 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:28 crc kubenswrapper[4832]: E1002 18:23:28.912039 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.41202299 +0000 UTC m=+166.381465852 (durationBeforeRetry 500ms). 
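Note: the plugin_watcher record just above (18:23:28.895) is the turning point. The hostpath provisioner has finally created its registration socket under /var/lib/kubelet/plugins_registry/, the watcher adds it to the desired-state cache, and a reconciler will shortly dial it, validate the driver, and register it (records at 18:23:29.534 onward, below). A sketch of the watcher half in Go; the real kubelet reacts to filesystem events rather than polling, so treat this as illustrative only:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "time"
    )

    // desiredStateCache mirrors the plugin watcher's record of
    // registration sockets it has seen, keyed by socket path.
    type desiredStateCache map[string]time.Time

    // scanOnce adds any *.sock under dir to the cache (or refreshes its
    // timestamp), which is what "Adding socket path or updating
    // timestamp to desired state cache" reports.
    func scanOnce(dir string, cache desiredStateCache) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            if filepath.Ext(e.Name()) != ".sock" {
                continue
            }
            p := filepath.Join(dir, e.Name())
            cache[p] = time.Now()
            fmt.Printf("adding socket path or updating timestamp: %s\n", p)
            // A separate reconciler then dials p, asks the plugin for its
            // name ("kubevirt.io.hostpath-provisioner") and endpoint, and
            // registers it, after which the mount retries above succeed.
        }
        return nil
    }

    func main() {
        cache := desiredStateCache{}
        for range time.Tick(time.Second) {
            if err := scanOnce("/var/lib/kubelet/plugins_registry", cache); err != nil {
                fmt.Fprintln(os.Stderr, "scan failed:", err)
            }
        }
    }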
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:28 crc kubenswrapper[4832]: I1002 18:23:28.983539 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgq75"] Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.016009 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.016679 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.51664808 +0000 UTC m=+166.486090962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.099176 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c88zl"] Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.118161 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.121831 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.61869269 +0000 UTC m=+166.588135562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.209855 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:29 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:29 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:29 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.209951 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.220406 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.220967 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.720945195 +0000 UTC m=+166.690388067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.274551 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nkcs9"] Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.278610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgq75" event={"ID":"cd8c743f-e305-47fe-9858-0c0af2a86ea3","Type":"ContainerStarted","Data":"c370165ce552b3130e05e3abe54b112d3f4ae4d3a0fe9920808c6abdef26cf86"} Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.281739 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" event={"ID":"042f796e-c81a-4fd3-898c-ca596ed62bd5","Type":"ContainerStarted","Data":"0ce5dc8f832818c245a8b90f08a7ebfa2398f4ff71069253294875e54aae9071"} Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.284064 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c88zl" event={"ID":"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a","Type":"ContainerStarted","Data":"a06e17f211169cece3857613494bbfbc42b778b1f2f790f59198e7ac4cdcfea3"} Oct 02 18:23:29 crc kubenswrapper[4832]: W1002 18:23:29.284426 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb31d21a6_5799_4aaa_ae96_b82dadcdc6c4.slice/crio-2d99d5d2b4f05f52efcec611536229db28492461e87fecc71c316721e611ba51 WatchSource:0}: Error finding container 2d99d5d2b4f05f52efcec611536229db28492461e87fecc71c316721e611ba51: Status 404 returned error can't find the container with id 2d99d5d2b4f05f52efcec611536229db28492461e87fecc71c316721e611ba51 Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.290085 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86r7x" event={"ID":"8f7511c3-f168-4aa4-ab7a-09e94e1ee900","Type":"ContainerStarted","Data":"8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce"} Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.290146 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86r7x" event={"ID":"8f7511c3-f168-4aa4-ab7a-09e94e1ee900","Type":"ContainerStarted","Data":"c9ad24a8f973dc5ba25c0d9433814072324c4f89342bf65921d2495df49620a1"} Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.310229 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tpvsc" podStartSLOduration=10.310210886 podStartE2EDuration="10.310210886s" podCreationTimestamp="2025-10-02 18:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:29.308934715 +0000 UTC m=+166.278377587" watchObservedRunningTime="2025-10-02 18:23:29.310210886 +0000 UTC m=+166.279653758" Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.321706 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.321891 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.82186207 +0000 UTC m=+166.791304942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.322081 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.322387 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.822379536 +0000 UTC m=+166.791822408 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.423386 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.423689 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.923646561 +0000 UTC m=+166.893089433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.424296 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.424886 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:29.924873319 +0000 UTC m=+166.894316401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.525012 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.525204 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-02 18:23:30.025177884 +0000 UTC m=+166.994620756 (durationBeforeRetry 500ms). 
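Note: recovery completes in the records below. At 18:23:29.538 the kubelet validates and registers kubevirt.io.hostpath-provisioner at /var/lib/kubelet/plugins/csi-hostpath/csi.sock; the stuck TearDown then succeeds at 18:23:29.632, and the image-registry PVC mounts. The "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." line is expected: a CSI node plugin that does not advertise that capability has no NodeStageVolume step, so the kubelet records MountDevice as succeeded without calling the plugin and goes straight to NodePublishVolume (the SetUp that succeeds at 18:23:29.779). Sketched in Go with assumed type names:

    package main

    import "fmt"

    // nodeCapability stands in for the CSI NodeServiceCapability values
    // a plugin reports from NodeGetCapabilities.
    type nodeCapability string

    const stageUnstageVolume nodeCapability = "STAGE_UNSTAGE_VOLUME"

    type nodePlugin struct {
        name string
        caps map[nodeCapability]bool
    }

    // mountDevice models attacher.MountDevice: only plugins advertising
    // STAGE_UNSTAGE_VOLUME get a NodeStageVolume call to the global
    // mount path; for everything else the step is skipped.
    func mountDevice(p nodePlugin, globalMountPath string) {
        if !p.caps[stageUnstageVolume] {
            fmt.Println("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
            return
        }
        fmt.Println("NodeStageVolume ->", globalMountPath)
    }

    // setUp models MountVolume.SetUp, i.e. NodePublishVolume into the
    // pod's volume directory; this runs for every CSI volume.
    func setUp(p nodePlugin, podVolumePath string) {
        fmt.Println("NodePublishVolume ->", podVolumePath)
    }

    func main() {
        hostpath := nodePlugin{
            name: "kubevirt.io.hostpath-provisioner",
            caps: map[nodeCapability]bool{}, // no STAGE_UNSTAGE_VOLUME
        }
        mountDevice(hostpath, "/var/lib/kubelet/plugins/kubernetes.io/csi/.../globalmount")
        setUp(hostpath, "/var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~csi/pvc-657094db/mount")
    }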
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.525379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:29 crc kubenswrapper[4832]: E1002 18:23:29.525690 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-02 18:23:30.02567913 +0000 UTC m=+166.995122002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8gdws" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.534020 4832 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-02T18:23:28.895371379Z","Handler":null,"Name":""} Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.538480 4832 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.538527 4832 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.627012 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.632855 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.729531 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.736364 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.736454 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.779763 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8gdws\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:29 crc kubenswrapper[4832]: I1002 18:23:29.948006 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.160027 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8gdws"] Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.165323 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p42pd"] Oct 02 18:23:30 crc kubenswrapper[4832]: W1002 18:23:30.166464 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d46891_e775_4f73_b366_544ba67c1adf.slice/crio-6be3947cc118ab0993d28bb519a3075b670799bdcfb271b2e7de286ab64def08 WatchSource:0}: Error finding container 6be3947cc118ab0993d28bb519a3075b670799bdcfb271b2e7de286ab64def08: Status 404 returned error can't find the container with id 6be3947cc118ab0993d28bb519a3075b670799bdcfb271b2e7de286ab64def08 Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.166829 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.169177 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.182367 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.183512 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.185164 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p42pd"] Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.188367 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.189194 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.189684 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.208308 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:30 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:30 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:30 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.208377 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.299111 4832 generic.go:334] "Generic (PLEG): container finished" podID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerID="8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce" exitCode=0 Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.299438 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86r7x" event={"ID":"8f7511c3-f168-4aa4-ab7a-09e94e1ee900","Type":"ContainerDied","Data":"8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce"} Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.301771 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.302233 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" event={"ID":"e8d46891-e775-4f73-b366-544ba67c1adf","Type":"ContainerStarted","Data":"6be3947cc118ab0993d28bb519a3075b670799bdcfb271b2e7de286ab64def08"} Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.304772 4832 generic.go:334] "Generic (PLEG): container finished" podID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerID="392b4ceeebe30b0b5b9310aa02bb09f5f6c9ce2dd679391e0a1778d9f3164c11" exitCode=0 Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.304866 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkcs9" event={"ID":"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4","Type":"ContainerDied","Data":"392b4ceeebe30b0b5b9310aa02bb09f5f6c9ce2dd679391e0a1778d9f3164c11"} Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.304910 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkcs9" 
event={"ID":"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4","Type":"ContainerStarted","Data":"2d99d5d2b4f05f52efcec611536229db28492461e87fecc71c316721e611ba51"} Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.307598 4832 generic.go:334] "Generic (PLEG): container finished" podID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerID="7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2" exitCode=0 Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.307655 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgq75" event={"ID":"cd8c743f-e305-47fe-9858-0c0af2a86ea3","Type":"ContainerDied","Data":"7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2"} Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.312883 4832 generic.go:334] "Generic (PLEG): container finished" podID="e291aef6-bbde-41a5-9981-96b992547e03" containerID="64d284757ed2c12f01897739b3e186e8a14c4692a20fe726fd8d3c98fe203c77" exitCode=0 Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.312933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" event={"ID":"e291aef6-bbde-41a5-9981-96b992547e03","Type":"ContainerDied","Data":"64d284757ed2c12f01897739b3e186e8a14c4692a20fe726fd8d3c98fe203c77"} Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.326018 4832 generic.go:334] "Generic (PLEG): container finished" podID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerID="5f44b4befa1d4ca1577e9167bbaf04ab084fd8433f621144f670397b0c9fe205" exitCode=0 Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.326083 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c88zl" event={"ID":"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a","Type":"ContainerDied","Data":"5f44b4befa1d4ca1577e9167bbaf04ab084fd8433f621144f670397b0c9fe205"} Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.337165 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz4vb\" (UniqueName: \"kubernetes.io/projected/d532f249-806b-4c9d-936e-7504d83f11ae-kube-api-access-tz4vb\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.337242 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.337298 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.337464 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-catalog-content\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " 
pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.337536 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-utilities\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.438810 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-catalog-content\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.439080 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-utilities\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.439249 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz4vb\" (UniqueName: \"kubernetes.io/projected/d532f249-806b-4c9d-936e-7504d83f11ae-kube-api-access-tz4vb\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.439385 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.439523 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.440401 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.440611 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-utilities\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.440651 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-catalog-content\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " 
pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.460615 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.461710 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz4vb\" (UniqueName: \"kubernetes.io/projected/d532f249-806b-4c9d-936e-7504d83f11ae-kube-api-access-tz4vb\") pod \"redhat-marketplace-p42pd\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.507114 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.531823 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.577010 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2l5x4"] Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.578311 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.589627 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2l5x4"] Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.745563 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmmc\" (UniqueName: \"kubernetes.io/projected/359ecf9f-50f0-4941-b8d6-4b3330187bf4-kube-api-access-5jmmc\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.745622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-catalog-content\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.745661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-utilities\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.847114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jmmc\" (UniqueName: \"kubernetes.io/projected/359ecf9f-50f0-4941-b8d6-4b3330187bf4-kube-api-access-5jmmc\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.847157 4832 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-catalog-content\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.847195 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-utilities\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.847787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-utilities\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.848015 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-catalog-content\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.864725 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jmmc\" (UniqueName: \"kubernetes.io/projected/359ecf9f-50f0-4941-b8d6-4b3330187bf4-kube-api-access-5jmmc\") pod \"redhat-marketplace-2l5x4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.939684 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.939758 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.942142 4832 patch_prober.go:28] interesting pod/console-f9d7485db-7ffgv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.942223 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7ffgv" podUID="b562a645-10d9-44f7-a4fe-d3bf63ac9185" containerName="console" probeResult="failure" output="Get \"https://10.217.0.21:8443/health\": dial tcp 10.217.0.21:8443: connect: connection refused" Oct 02 18:23:30 crc kubenswrapper[4832]: I1002 18:23:30.944360 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.018119 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p42pd"] Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.024959 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-nk8bt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.024999 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nk8bt" podUID="c501c3e2-851d-452a-9fd1-0cdb21ac15e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.028691 4832 patch_prober.go:28] interesting pod/downloads-7954f5f757-nk8bt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.028760 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nk8bt" podUID="c501c3e2-851d-452a-9fd1-0cdb21ac15e6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Oct 02 18:23:31 crc kubenswrapper[4832]: W1002 18:23:31.048627 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd532f249_806b_4c9d_936e_7504d83f11ae.slice/crio-69c22821cac87eb61390fd7a54620ac6ce2266ef7d87cd6a5eb1f0f3930dd4af WatchSource:0}: Error finding container 69c22821cac87eb61390fd7a54620ac6ce2266ef7d87cd6a5eb1f0f3930dd4af: Status 404 returned error can't find the container with id 69c22821cac87eb61390fd7a54620ac6ce2266ef7d87cd6a5eb1f0f3930dd4af Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.082351 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.169807 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.191996 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fcwtc"] Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.195250 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.210019 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.212043 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fcwtc"] Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.218447 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:31 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:31 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:31 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.218520 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.219654 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2l5x4"] Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.237135 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.347422 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd","Type":"ContainerStarted","Data":"3cfd2806a3500f4986c1e4630b499dbb04ebe8875af14624db938ae4854ec06e"} Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.350038 4832 generic.go:334] "Generic (PLEG): container finished" podID="d532f249-806b-4c9d-936e-7504d83f11ae" containerID="762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88" exitCode=0 Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.350139 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p42pd" event={"ID":"d532f249-806b-4c9d-936e-7504d83f11ae","Type":"ContainerDied","Data":"762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88"} Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.350180 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p42pd" event={"ID":"d532f249-806b-4c9d-936e-7504d83f11ae","Type":"ContainerStarted","Data":"69c22821cac87eb61390fd7a54620ac6ce2266ef7d87cd6a5eb1f0f3930dd4af"} Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.355874 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" event={"ID":"e8d46891-e775-4f73-b366-544ba67c1adf","Type":"ContainerStarted","Data":"26de5668cd3d1fbfcfd2aae5481a713a98822ad3f648fb1a64bb77e6e6b27a03"} Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.356682 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.360679 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-catalog-content\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.360727 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcdqc\" (UniqueName: \"kubernetes.io/projected/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-kube-api-access-bcdqc\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.360789 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-utilities\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.363684 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2l5x4" event={"ID":"359ecf9f-50f0-4941-b8d6-4b3330187bf4","Type":"ContainerStarted","Data":"42985cf5a96e315749e04e00ca3f49362badc3a5f4d4102a3629d22138aa88b2"} Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.407713 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" podStartSLOduration=138.407689544 podStartE2EDuration="2m18.407689544s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:31.403829263 +0000 UTC m=+168.373272135" watchObservedRunningTime="2025-10-02 18:23:31.407689544 +0000 UTC m=+168.377132426" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.462309 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcdqc\" (UniqueName: \"kubernetes.io/projected/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-kube-api-access-bcdqc\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.462440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-utilities\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.463215 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-utilities\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.464408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-catalog-content\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " 
pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.464941 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-catalog-content\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.500988 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcdqc\" (UniqueName: \"kubernetes.io/projected/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-kube-api-access-bcdqc\") pod \"redhat-operators-fcwtc\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.525151 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.529860 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.539417 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-ljdjq" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.578082 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-44j5w"] Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.591098 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.622470 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-44j5w"] Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.677205 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-catalog-content\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.677308 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-utilities\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.677452 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckj22\" (UniqueName: \"kubernetes.io/projected/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-kube-api-access-ckj22\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.768015 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.778726 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-catalog-content\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.779155 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-catalog-content\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.779331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-utilities\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.780135 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-utilities\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.780701 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckj22\" (UniqueName: \"kubernetes.io/projected/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-kube-api-access-ckj22\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.799412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckj22\" (UniqueName: \"kubernetes.io/projected/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-kube-api-access-ckj22\") pod \"redhat-operators-44j5w\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.881684 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e291aef6-bbde-41a5-9981-96b992547e03-config-volume\") pod \"e291aef6-bbde-41a5-9981-96b992547e03\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.881816 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krmn2\" (UniqueName: \"kubernetes.io/projected/e291aef6-bbde-41a5-9981-96b992547e03-kube-api-access-krmn2\") pod \"e291aef6-bbde-41a5-9981-96b992547e03\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.881893 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e291aef6-bbde-41a5-9981-96b992547e03-secret-volume\") pod \"e291aef6-bbde-41a5-9981-96b992547e03\" (UID: \"e291aef6-bbde-41a5-9981-96b992547e03\") " Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.884314 
4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e291aef6-bbde-41a5-9981-96b992547e03-config-volume" (OuterVolumeSpecName: "config-volume") pod "e291aef6-bbde-41a5-9981-96b992547e03" (UID: "e291aef6-bbde-41a5-9981-96b992547e03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.886276 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e291aef6-bbde-41a5-9981-96b992547e03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e291aef6-bbde-41a5-9981-96b992547e03" (UID: "e291aef6-bbde-41a5-9981-96b992547e03"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.887081 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e291aef6-bbde-41a5-9981-96b992547e03-kube-api-access-krmn2" (OuterVolumeSpecName: "kube-api-access-krmn2") pod "e291aef6-bbde-41a5-9981-96b992547e03" (UID: "e291aef6-bbde-41a5-9981-96b992547e03"). InnerVolumeSpecName "kube-api-access-krmn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.931340 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fcwtc"] Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.968152 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.984100 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e291aef6-bbde-41a5-9981-96b992547e03-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.984148 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krmn2\" (UniqueName: \"kubernetes.io/projected/e291aef6-bbde-41a5-9981-96b992547e03-kube-api-access-krmn2\") on node \"crc\" DevicePath \"\"" Oct 02 18:23:31 crc kubenswrapper[4832]: I1002 18:23:31.984164 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e291aef6-bbde-41a5-9981-96b992547e03-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 18:23:32 crc kubenswrapper[4832]: W1002 18:23:32.005343 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45cecbc0_ddb6_4cc1_b2d8_892f598086a5.slice/crio-c829ac7fd0cb07ecd3bb8d5b5d70aca84e7c8081d5f23b57d8f386c3c3f1f1db WatchSource:0}: Error finding container c829ac7fd0cb07ecd3bb8d5b5d70aca84e7c8081d5f23b57d8f386c3c3f1f1db: Status 404 returned error can't find the container with id c829ac7fd0cb07ecd3bb8d5b5d70aca84e7c8081d5f23b57d8f386c3c3f1f1db Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.050877 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 02 18:23:32 crc kubenswrapper[4832]: E1002 18:23:32.051956 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e291aef6-bbde-41a5-9981-96b992547e03" containerName="collect-profiles" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.051982 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e291aef6-bbde-41a5-9981-96b992547e03" containerName="collect-profiles" Oct 
02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.052129 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e291aef6-bbde-41a5-9981-96b992547e03" containerName="collect-profiles" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.052552 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.056447 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.063761 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.064367 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.076031 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.186633 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.186686 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.206360 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.209430 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:32 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:32 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:32 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.209468 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.280296 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-44j5w"] Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.290679 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:32 crc 
kubenswrapper[4832]: I1002 18:23:32.290891 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.290971 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.309821 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.371286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44j5w" event={"ID":"a6d7c663-a450-49ec-a95a-d38d8df2b1cc","Type":"ContainerStarted","Data":"9aa5c4c38dbe1b8c9d8ac63d926410ed5eda4a0f1430fde32a9cf9a9d23133d9"} Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.375337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcwtc" event={"ID":"45cecbc0-ddb6-4cc1-b2d8-892f598086a5","Type":"ContainerStarted","Data":"cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55"} Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.375364 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcwtc" event={"ID":"45cecbc0-ddb6-4cc1-b2d8-892f598086a5","Type":"ContainerStarted","Data":"c829ac7fd0cb07ecd3bb8d5b5d70aca84e7c8081d5f23b57d8f386c3c3f1f1db"} Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.378416 4832 generic.go:334] "Generic (PLEG): container finished" podID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerID="8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268" exitCode=0 Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.378498 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2l5x4" event={"ID":"359ecf9f-50f0-4941-b8d6-4b3330187bf4","Type":"ContainerDied","Data":"8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268"} Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.385106 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" event={"ID":"e291aef6-bbde-41a5-9981-96b992547e03","Type":"ContainerDied","Data":"8ee57580f6730db125a9e82bfee279a69946b9d74a7cb98a8e8795e3e369973c"} Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.385138 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee57580f6730db125a9e82bfee279a69946b9d74a7cb98a8e8795e3e369973c" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.385155 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.388747 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.390301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd","Type":"ContainerStarted","Data":"155092c700984287ecc93657999bc360096ee92a3b8b2c5553522205cecf0969"} Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.486657 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.486635037 podStartE2EDuration="2.486635037s" podCreationTimestamp="2025-10-02 18:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:32.477634926 +0000 UTC m=+169.447077798" watchObservedRunningTime="2025-10-02 18:23:32.486635037 +0000 UTC m=+169.456077909" Oct 02 18:23:32 crc kubenswrapper[4832]: I1002 18:23:32.733151 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.209092 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:33 crc kubenswrapper[4832]: [-]has-synced failed: reason withheld Oct 02 18:23:33 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:33 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.209494 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.409359 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"07af2170-0444-4cb4-9ec0-3fd7b7265d99","Type":"ContainerStarted","Data":"118d45b19726fbc1ebaf5e9a870cbc7b130212ea30cfe26158ae79e93bf23d15"} Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.412021 4832 generic.go:334] "Generic (PLEG): container finished" podID="57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd" containerID="155092c700984287ecc93657999bc360096ee92a3b8b2c5553522205cecf0969" exitCode=0 Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.412068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd","Type":"ContainerDied","Data":"155092c700984287ecc93657999bc360096ee92a3b8b2c5553522205cecf0969"} Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.415439 4832 generic.go:334] "Generic (PLEG): container finished" podID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerID="99a5db85419a75725be5b991907b702903be3ca5dc2db5036de9e54640706aa2" exitCode=0 Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.415490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44j5w" 
event={"ID":"a6d7c663-a450-49ec-a95a-d38d8df2b1cc","Type":"ContainerDied","Data":"99a5db85419a75725be5b991907b702903be3ca5dc2db5036de9e54640706aa2"} Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.418605 4832 generic.go:334] "Generic (PLEG): container finished" podID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerID="cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55" exitCode=0 Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.418712 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcwtc" event={"ID":"45cecbc0-ddb6-4cc1-b2d8-892f598086a5","Type":"ContainerDied","Data":"cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55"} Oct 02 18:23:33 crc kubenswrapper[4832]: I1002 18:23:33.470536 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fjcjg" Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.209424 4832 patch_prober.go:28] interesting pod/router-default-5444994796-k5cpd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 02 18:23:34 crc kubenswrapper[4832]: [+]has-synced ok Oct 02 18:23:34 crc kubenswrapper[4832]: [+]process-running ok Oct 02 18:23:34 crc kubenswrapper[4832]: healthz check failed Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.209537 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k5cpd" podUID="ee435d24-9de6-4a34-80e7-044ae5bc1bef" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.441393 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"07af2170-0444-4cb4-9ec0-3fd7b7265d99","Type":"ContainerStarted","Data":"7ff397cdd114f83139925fea26f8ffcdb172f065a72629c99ab66942d2926297"} Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.492712 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.492687788 podStartE2EDuration="2.492687788s" podCreationTimestamp="2025-10-02 18:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:23:34.487920429 +0000 UTC m=+171.457363301" watchObservedRunningTime="2025-10-02 18:23:34.492687788 +0000 UTC m=+171.462130660" Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.695186 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.840293 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kubelet-dir\") pod \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\" (UID: \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\") " Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.840482 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kube-api-access\") pod \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\" (UID: \"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd\") " Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.840473 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd" (UID: "57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.840901 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.847746 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd" (UID: "57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:23:34 crc kubenswrapper[4832]: I1002 18:23:34.942953 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.207711 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.210554 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-k5cpd" Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.466596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd","Type":"ContainerDied","Data":"3cfd2806a3500f4986c1e4630b499dbb04ebe8875af14624db938ae4854ec06e"} Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.466897 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cfd2806a3500f4986c1e4630b499dbb04ebe8875af14624db938ae4854ec06e" Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.466637 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.473180 4832 generic.go:334] "Generic (PLEG): container finished" podID="07af2170-0444-4cb4-9ec0-3fd7b7265d99" containerID="7ff397cdd114f83139925fea26f8ffcdb172f065a72629c99ab66942d2926297" exitCode=0 Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.473868 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"07af2170-0444-4cb4-9ec0-3fd7b7265d99","Type":"ContainerDied","Data":"7ff397cdd114f83139925fea26f8ffcdb172f065a72629c99ab66942d2926297"} Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.801322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.810575 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8adcf2d1-6a80-40e8-a94b-627c2b18443f-metrics-certs\") pod \"network-metrics-daemon-m27c2\" (UID: \"8adcf2d1-6a80-40e8-a94b-627c2b18443f\") " pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:23:35 crc kubenswrapper[4832]: I1002 18:23:35.849516 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-m27c2" Oct 02 18:23:39 crc kubenswrapper[4832]: I1002 18:23:39.812722 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:39 crc kubenswrapper[4832]: I1002 18:23:39.868821 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kube-api-access\") pod \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\" (UID: \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\") " Oct 02 18:23:39 crc kubenswrapper[4832]: I1002 18:23:39.868859 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kubelet-dir\") pod \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\" (UID: \"07af2170-0444-4cb4-9ec0-3fd7b7265d99\") " Oct 02 18:23:39 crc kubenswrapper[4832]: I1002 18:23:39.869085 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "07af2170-0444-4cb4-9ec0-3fd7b7265d99" (UID: "07af2170-0444-4cb4-9ec0-3fd7b7265d99"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:23:39 crc kubenswrapper[4832]: I1002 18:23:39.877434 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "07af2170-0444-4cb4-9ec0-3fd7b7265d99" (UID: "07af2170-0444-4cb4-9ec0-3fd7b7265d99"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:23:39 crc kubenswrapper[4832]: I1002 18:23:39.969697 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 02 18:23:39 crc kubenswrapper[4832]: I1002 18:23:39.969738 4832 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07af2170-0444-4cb4-9ec0-3fd7b7265d99-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 02 18:23:40 crc kubenswrapper[4832]: I1002 18:23:40.541975 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"07af2170-0444-4cb4-9ec0-3fd7b7265d99","Type":"ContainerDied","Data":"118d45b19726fbc1ebaf5e9a870cbc7b130212ea30cfe26158ae79e93bf23d15"} Oct 02 18:23:40 crc kubenswrapper[4832]: I1002 18:23:40.542316 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="118d45b19726fbc1ebaf5e9a870cbc7b130212ea30cfe26158ae79e93bf23d15" Oct 02 18:23:40 crc kubenswrapper[4832]: I1002 18:23:40.542019 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 02 18:23:40 crc kubenswrapper[4832]: I1002 18:23:40.949953 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:40 crc kubenswrapper[4832]: I1002 18:23:40.954426 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:23:41 crc kubenswrapper[4832]: I1002 18:23:41.040658 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-nk8bt" Oct 02 18:23:47 crc kubenswrapper[4832]: I1002 18:23:47.493498 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 02 18:23:49 crc kubenswrapper[4832]: I1002 18:23:49.956009 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:23:56 crc kubenswrapper[4832]: I1002 18:23:56.875622 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:23:56 crc kubenswrapper[4832]: I1002 18:23:56.876676 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:24:01 crc kubenswrapper[4832]: I1002 18:24:01.991333 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncb2p" Oct 02 18:24:05 crc kubenswrapper[4832]: E1002 18:24:05.130071 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:fc6d1468707e4bcc767e25ba90e295828fee37cd04f9ceaa879288e8fb4d2d84: Get 
\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:fc6d1468707e4bcc767e25ba90e295828fee37cd04f9ceaa879288e8fb4d2d84\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 02 18:24:05 crc kubenswrapper[4832]: E1002 18:24:05.130866 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckj22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-44j5w_openshift-marketplace(a6d7c663-a450-49ec-a95a-d38d8df2b1cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:fc6d1468707e4bcc767e25ba90e295828fee37cd04f9ceaa879288e8fb4d2d84: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:fc6d1468707e4bcc767e25ba90e295828fee37cd04f9ceaa879288e8fb4d2d84\": context canceled" logger="UnhandledError" Oct 02 18:24:05 crc kubenswrapper[4832]: E1002 18:24:05.132088 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:fc6d1468707e4bcc767e25ba90e295828fee37cd04f9ceaa879288e8fb4d2d84: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:fc6d1468707e4bcc767e25ba90e295828fee37cd04f9ceaa879288e8fb4d2d84\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-44j5w" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" Oct 02 18:24:05 crc kubenswrapper[4832]: E1002 18:24:05.987598 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 02 18:24:05 crc kubenswrapper[4832]: E1002 18:24:05.988093 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tz4vb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p42pd_openshift-marketplace(d532f249-806b-4c9d-936e-7504d83f11ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 18:24:05 crc kubenswrapper[4832]: E1002 18:24:05.989270 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p42pd" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" Oct 02 18:24:07 crc kubenswrapper[4832]: E1002 18:24:07.133310 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p42pd" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" Oct 02 18:24:07 crc kubenswrapper[4832]: E1002 18:24:07.194840 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 02 18:24:07 crc kubenswrapper[4832]: E1002 18:24:07.195024 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-86r7x_openshift-marketplace(8f7511c3-f168-4aa4-ab7a-09e94e1ee900): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 18:24:07 crc kubenswrapper[4832]: E1002 18:24:07.196295 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-86r7x" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.345875 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-86r7x" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.456933 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.457556 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fblvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nkcs9_openshift-marketplace(b31d21a6-5799-4aaa-ae96-b82dadcdc6c4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.459024 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nkcs9" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.467778 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.467873 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gbp67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jgq75_openshift-marketplace(cd8c743f-e305-47fe-9858-0c0af2a86ea3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.469113 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jgq75" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.501786 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.501935 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fh6c2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c88zl_openshift-marketplace(7ff7bbb4-a78b-4f19-a838-735f3afd9e4a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 18:24:08 crc kubenswrapper[4832]: E1002 18:24:08.503111 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-c88zl" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" Oct 02 18:24:08 crc kubenswrapper[4832]: I1002 18:24:08.779667 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-m27c2"] Oct 02 18:24:11 crc kubenswrapper[4832]: E1002 18:24:11.711477 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nkcs9" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" Oct 02 18:24:11 crc kubenswrapper[4832]: E1002 18:24:11.711587 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c88zl" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" Oct 02 18:24:11 crc kubenswrapper[4832]: E1002 18:24:11.711757 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jgq75" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" Oct 02 18:24:12 crc kubenswrapper[4832]: W1002 18:24:12.358185 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8adcf2d1_6a80_40e8_a94b_627c2b18443f.slice/crio-400d7d8c29a977b31f162080a149c43831661508e724690940c320e19e1998a1 WatchSource:0}: Error finding container 400d7d8c29a977b31f162080a149c43831661508e724690940c320e19e1998a1: Status 404 returned error can't find the container with id 400d7d8c29a977b31f162080a149c43831661508e724690940c320e19e1998a1
Oct 02 18:24:12 crc kubenswrapper[4832]: I1002 18:24:12.757824 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-m27c2" event={"ID":"8adcf2d1-6a80-40e8-a94b-627c2b18443f","Type":"ContainerStarted","Data":"400d7d8c29a977b31f162080a149c43831661508e724690940c320e19e1998a1"}
Oct 02 18:24:13 crc kubenswrapper[4832]: I1002 18:24:13.769480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-m27c2" event={"ID":"8adcf2d1-6a80-40e8-a94b-627c2b18443f","Type":"ContainerStarted","Data":"71daf4c93a270f0cd1aec47b14c9187c02e8729fbbdfe74f56612eac66276e03"}
Oct 02 18:24:13 crc kubenswrapper[4832]: I1002 18:24:13.771259 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-m27c2" event={"ID":"8adcf2d1-6a80-40e8-a94b-627c2b18443f","Type":"ContainerStarted","Data":"2a6e9ac5723270d15a128131c64f918b037c2320ad02a7739b1ecb15252415d2"}
Oct 02 18:24:13 crc kubenswrapper[4832]: I1002 18:24:13.772633 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcwtc" event={"ID":"45cecbc0-ddb6-4cc1-b2d8-892f598086a5","Type":"ContainerStarted","Data":"62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2"}
Oct 02 18:24:13 crc kubenswrapper[4832]: I1002 18:24:13.778604 4832 generic.go:334] "Generic (PLEG): container finished" podID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerID="00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39" exitCode=0
Oct 02 18:24:13 crc kubenswrapper[4832]: I1002 18:24:13.778695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2l5x4" event={"ID":"359ecf9f-50f0-4941-b8d6-4b3330187bf4","Type":"ContainerDied","Data":"00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39"}
Oct 02 18:24:13 crc kubenswrapper[4832]: I1002 18:24:13.808176 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-m27c2" podStartSLOduration=180.808154151 podStartE2EDuration="3m0.808154151s" podCreationTimestamp="2025-10-02 18:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:24:13.802067441 +0000 UTC m=+210.771510353" watchObservedRunningTime="2025-10-02 18:24:13.808154151 +0000 UTC m=+210.777597063"
Oct 02 18:24:14 crc kubenswrapper[4832]: I1002 18:24:14.791143 4832 generic.go:334] "Generic (PLEG): container finished" podID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerID="62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2" exitCode=0
Oct 02 18:24:14 crc kubenswrapper[4832]: I1002 18:24:14.791490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcwtc" event={"ID":"45cecbc0-ddb6-4cc1-b2d8-892f598086a5","Type":"ContainerDied","Data":"62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2"}
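[Editor's note] The W1002 "Failed to process watch event ... Status 404" warning above appears to be cadvisor noticing the new crio-400d7d8c... cgroup before the container is registered; the next PLEG line reports the same ID as ContainerStarted, so this reads as a harmless startup race rather than a lost container. The event={...} payloads on "SyncLoop (PLEG)" lines are plain JSON, which makes them easy to lift out mechanically; a sketch under that assumption (regex and sample line are mine):

```python
import json
import re

# Pair the pod name with the parsed PLEG event payload. Assumes the payload
# has no nested braces, which holds for ContainerStarted/ContainerDied here.
PLEG = re.compile(r'pod="(?P<pod>[^"]+)" event=(?P<event>\{[^}]*\})')

line = (
    'I1002 18:24:12.757824 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" '
    'pod="openshift-multus/network-metrics-daemon-m27c2" '
    'event={"ID":"8adcf2d1-6a80-40e8-a94b-627c2b18443f","Type":"ContainerStarted",'
    '"Data":"400d7d8c29a977b31f162080a149c43831661508e724690940c320e19e1998a1"}'
)

m = PLEG.search(line)
if m:
    event = json.loads(m["event"])
    print(m["pod"], event["Type"], event["Data"][:12])
```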
pod="openshift-marketplace/redhat-operators-fcwtc" event={"ID":"45cecbc0-ddb6-4cc1-b2d8-892f598086a5","Type":"ContainerStarted","Data":"0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5"} Oct 02 18:24:15 crc kubenswrapper[4832]: I1002 18:24:15.806222 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2l5x4" event={"ID":"359ecf9f-50f0-4941-b8d6-4b3330187bf4","Type":"ContainerStarted","Data":"8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0"} Oct 02 18:24:15 crc kubenswrapper[4832]: I1002 18:24:15.829320 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fcwtc" podStartSLOduration=3.038341088 podStartE2EDuration="44.829255683s" podCreationTimestamp="2025-10-02 18:23:31 +0000 UTC" firstStartedPulling="2025-10-02 18:23:33.422935912 +0000 UTC m=+170.392378784" lastFinishedPulling="2025-10-02 18:24:15.213850517 +0000 UTC m=+212.183293379" observedRunningTime="2025-10-02 18:24:15.825584417 +0000 UTC m=+212.795027299" watchObservedRunningTime="2025-10-02 18:24:15.829255683 +0000 UTC m=+212.798698585" Oct 02 18:24:15 crc kubenswrapper[4832]: I1002 18:24:15.852053 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2l5x4" podStartSLOduration=3.3453303549999998 podStartE2EDuration="45.852022883s" podCreationTimestamp="2025-10-02 18:23:30 +0000 UTC" firstStartedPulling="2025-10-02 18:23:32.380948393 +0000 UTC m=+169.350391265" lastFinishedPulling="2025-10-02 18:24:14.887640921 +0000 UTC m=+211.857083793" observedRunningTime="2025-10-02 18:24:15.848898396 +0000 UTC m=+212.818341288" watchObservedRunningTime="2025-10-02 18:24:15.852022883 +0000 UTC m=+212.821465785" Oct 02 18:24:20 crc kubenswrapper[4832]: I1002 18:24:20.944627 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:24:20 crc kubenswrapper[4832]: I1002 18:24:20.945702 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:24:21 crc kubenswrapper[4832]: I1002 18:24:21.363406 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:24:21 crc kubenswrapper[4832]: I1002 18:24:21.530967 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:24:21 crc kubenswrapper[4832]: I1002 18:24:21.531004 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:24:21 crc kubenswrapper[4832]: I1002 18:24:21.585130 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:24:21 crc kubenswrapper[4832]: I1002 18:24:21.853689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44j5w" event={"ID":"a6d7c663-a450-49ec-a95a-d38d8df2b1cc","Type":"ContainerStarted","Data":"3ddb7f02e3d370ad34f28e26cf62dee8fd4202c1b8e8a8f5fdf99d49752a12aa"} Oct 02 18:24:21 crc kubenswrapper[4832]: I1002 18:24:21.905478 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:24:21 crc kubenswrapper[4832]: I1002 18:24:21.915627 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:24:22 crc kubenswrapper[4832]: I1002 18:24:22.862760 4832 generic.go:334] "Generic (PLEG): container finished" podID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerID="3ddb7f02e3d370ad34f28e26cf62dee8fd4202c1b8e8a8f5fdf99d49752a12aa" exitCode=0 Oct 02 18:24:22 crc kubenswrapper[4832]: I1002 18:24:22.862842 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44j5w" event={"ID":"a6d7c663-a450-49ec-a95a-d38d8df2b1cc","Type":"ContainerDied","Data":"3ddb7f02e3d370ad34f28e26cf62dee8fd4202c1b8e8a8f5fdf99d49752a12aa"} Oct 02 18:24:23 crc kubenswrapper[4832]: I1002 18:24:23.874141 4832 generic.go:334] "Generic (PLEG): container finished" podID="d532f249-806b-4c9d-936e-7504d83f11ae" containerID="e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3" exitCode=0 Oct 02 18:24:23 crc kubenswrapper[4832]: I1002 18:24:23.874301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p42pd" event={"ID":"d532f249-806b-4c9d-936e-7504d83f11ae","Type":"ContainerDied","Data":"e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3"} Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.294466 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2l5x4"] Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.294745 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2l5x4" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerName="registry-server" containerID="cri-o://8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0" gracePeriod=2 Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.772898 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.857633 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-catalog-content\") pod \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.858438 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jmmc\" (UniqueName: \"kubernetes.io/projected/359ecf9f-50f0-4941-b8d6-4b3330187bf4-kube-api-access-5jmmc\") pod \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.858526 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-utilities\") pod \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\" (UID: \"359ecf9f-50f0-4941-b8d6-4b3330187bf4\") " Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.859824 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-utilities" (OuterVolumeSpecName: "utilities") pod "359ecf9f-50f0-4941-b8d6-4b3330187bf4" (UID: "359ecf9f-50f0-4941-b8d6-4b3330187bf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.865502 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359ecf9f-50f0-4941-b8d6-4b3330187bf4-kube-api-access-5jmmc" (OuterVolumeSpecName: "kube-api-access-5jmmc") pod "359ecf9f-50f0-4941-b8d6-4b3330187bf4" (UID: "359ecf9f-50f0-4941-b8d6-4b3330187bf4"). InnerVolumeSpecName "kube-api-access-5jmmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.876447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "359ecf9f-50f0-4941-b8d6-4b3330187bf4" (UID: "359ecf9f-50f0-4941-b8d6-4b3330187bf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.884313 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c88zl" event={"ID":"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a","Type":"ContainerStarted","Data":"6a6f7a9dc0b5caf46d0d7614ffb0d41e3c68a3c6f24e7d321589257835e9b8a4"} Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.894689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p42pd" event={"ID":"d532f249-806b-4c9d-936e-7504d83f11ae","Type":"ContainerStarted","Data":"5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877"} Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.900387 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44j5w" event={"ID":"a6d7c663-a450-49ec-a95a-d38d8df2b1cc","Type":"ContainerStarted","Data":"31311a17353045df78f71bf840ade8b93af17577c168a6c7deee499c523286b3"} Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.902585 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86r7x" event={"ID":"8f7511c3-f168-4aa4-ab7a-09e94e1ee900","Type":"ContainerStarted","Data":"c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe"} Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.909490 4832 generic.go:334] "Generic (PLEG): container finished" podID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerID="8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0" exitCode=0 Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.909547 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2l5x4" event={"ID":"359ecf9f-50f0-4941-b8d6-4b3330187bf4","Type":"ContainerDied","Data":"8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0"} Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.909570 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2l5x4" event={"ID":"359ecf9f-50f0-4941-b8d6-4b3330187bf4","Type":"ContainerDied","Data":"42985cf5a96e315749e04e00ca3f49362badc3a5f4d4102a3629d22138aa88b2"} Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.909605 4832 scope.go:117] "RemoveContainer" containerID="8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.909738 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2l5x4" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.929721 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-44j5w" podStartSLOduration=3.996061841 podStartE2EDuration="53.929337822s" podCreationTimestamp="2025-10-02 18:23:31 +0000 UTC" firstStartedPulling="2025-10-02 18:23:34.442491099 +0000 UTC m=+171.411933971" lastFinishedPulling="2025-10-02 18:24:24.37576708 +0000 UTC m=+221.345209952" observedRunningTime="2025-10-02 18:24:24.928077303 +0000 UTC m=+221.897520175" watchObservedRunningTime="2025-10-02 18:24:24.929337822 +0000 UTC m=+221.898780694" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.938611 4832 scope.go:117] "RemoveContainer" containerID="00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.953111 4832 scope.go:117] "RemoveContainer" containerID="8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.954452 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p42pd" podStartSLOduration=1.9319582400000002 podStartE2EDuration="54.954429556s" podCreationTimestamp="2025-10-02 18:23:30 +0000 UTC" firstStartedPulling="2025-10-02 18:23:31.357541656 +0000 UTC m=+168.326984518" lastFinishedPulling="2025-10-02 18:24:24.380012922 +0000 UTC m=+221.349455834" observedRunningTime="2025-10-02 18:24:24.952396333 +0000 UTC m=+221.921839205" watchObservedRunningTime="2025-10-02 18:24:24.954429556 +0000 UTC m=+221.923872428" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.959963 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.959997 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jmmc\" (UniqueName: \"kubernetes.io/projected/359ecf9f-50f0-4941-b8d6-4b3330187bf4-kube-api-access-5jmmc\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.960013 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359ecf9f-50f0-4941-b8d6-4b3330187bf4-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.968559 4832 scope.go:117] "RemoveContainer" containerID="8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0" Oct 02 18:24:24 crc kubenswrapper[4832]: E1002 18:24:24.969177 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0\": container with ID starting with 8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0 not found: ID does not exist" containerID="8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.969218 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0"} err="failed to get container status \"8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0\": rpc error: code = NotFound desc = could not find container 
\"8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0\": container with ID starting with 8e91248c49fdf7d590057b7a54ff702ce26d788c50fc99ed705a62dca1e22ec0 not found: ID does not exist" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.969284 4832 scope.go:117] "RemoveContainer" containerID="00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39" Oct 02 18:24:24 crc kubenswrapper[4832]: E1002 18:24:24.969674 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39\": container with ID starting with 00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39 not found: ID does not exist" containerID="00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.969782 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39"} err="failed to get container status \"00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39\": rpc error: code = NotFound desc = could not find container \"00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39\": container with ID starting with 00444445623fc0bf9b89b1bbdd2447d652206c9883a0cd79371f85c7420b2a39 not found: ID does not exist" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.969898 4832 scope.go:117] "RemoveContainer" containerID="8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268" Oct 02 18:24:24 crc kubenswrapper[4832]: E1002 18:24:24.970336 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268\": container with ID starting with 8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268 not found: ID does not exist" containerID="8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.970377 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268"} err="failed to get container status \"8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268\": rpc error: code = NotFound desc = could not find container \"8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268\": container with ID starting with 8eddc9743a8b8508ec3f57c1c84cdabc55e8e2864f338d56a68874d37f8aa268 not found: ID does not exist" Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.987569 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2l5x4"] Oct 02 18:24:24 crc kubenswrapper[4832]: I1002 18:24:24.990962 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2l5x4"] Oct 02 18:24:25 crc kubenswrapper[4832]: I1002 18:24:25.233458 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" path="/var/lib/kubelet/pods/359ecf9f-50f0-4941-b8d6-4b3330187bf4/volumes" Oct 02 18:24:25 crc kubenswrapper[4832]: I1002 18:24:25.919051 4832 generic.go:334] "Generic (PLEG): container finished" podID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerID="6a6f7a9dc0b5caf46d0d7614ffb0d41e3c68a3c6f24e7d321589257835e9b8a4" exitCode=0 Oct 02 18:24:25 crc 
kubenswrapper[4832]: I1002 18:24:25.919180 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c88zl" event={"ID":"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a","Type":"ContainerDied","Data":"6a6f7a9dc0b5caf46d0d7614ffb0d41e3c68a3c6f24e7d321589257835e9b8a4"}
Oct 02 18:24:25 crc kubenswrapper[4832]: I1002 18:24:25.928456 4832 generic.go:334] "Generic (PLEG): container finished" podID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerID="c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe" exitCode=0
Oct 02 18:24:25 crc kubenswrapper[4832]: I1002 18:24:25.928568 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86r7x" event={"ID":"8f7511c3-f168-4aa4-ab7a-09e94e1ee900","Type":"ContainerDied","Data":"c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe"}
Oct 02 18:24:25 crc kubenswrapper[4832]: I1002 18:24:25.932428 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkcs9" event={"ID":"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4","Type":"ContainerStarted","Data":"b568dd816eb6fc4ea2016e62c37398b9e485677450b7877257ce1d679785f426"}
Oct 02 18:24:26 crc kubenswrapper[4832]: I1002 18:24:26.875766 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 18:24:26 crc kubenswrapper[4832]: I1002 18:24:26.875845 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 18:24:26 crc kubenswrapper[4832]: I1002 18:24:26.875899 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg"
Oct 02 18:24:26 crc kubenswrapper[4832]: I1002 18:24:26.876607 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 02 18:24:26 crc kubenswrapper[4832]: I1002 18:24:26.876695 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c" gracePeriod=600
Oct 02 18:24:26 crc kubenswrapper[4832]: I1002 18:24:26.943748 4832 generic.go:334] "Generic (PLEG): container finished" podID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerID="b568dd816eb6fc4ea2016e62c37398b9e485677450b7877257ce1d679785f426" exitCode=0
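[Editor's note] The machine-config-daemon sequence above is the probe-driven restart path end to end: patch_prober.go and prober.go record the refused liveness check, kuberuntime_manager.go logs that the container "will be restarted", and kuberuntime_container.go kills it with the pod's full termination grace period (gracePeriod=600). That contrasts with the API-driven deletions of the marketplace catalog pods elsewhere in this log, which are killed with gracePeriod=2. A hedged sketch for collecting these kill events so the two cases can be separated; the regex and sample are mine:

```python
import re

# Extract pod, container name, and grace period from kubelet's
# "Killing container with a grace period" entries.
KILL = re.compile(
    r'"Killing container with a grace period" pod="(?P<pod>[^"]+)"'
    r'.*?containerName="(?P<name>[^"]+)"'
    r'.*?gracePeriod=(?P<grace>\d+)'
)

line = (
    'I1002 18:24:26.876695 4832 kuberuntime_container.go:808] '
    '"Killing container with a grace period" '
    'pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" '
    'podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" '
    'containerName="machine-config-daemon" '
    'containerID="cri-o://b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c" '
    'gracePeriod=600'
)

m = KILL.search(line)
if m:
    print(m["pod"], m["name"], "gracePeriod=" + m["grace"])
```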
event={"ID":"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4","Type":"ContainerDied","Data":"b568dd816eb6fc4ea2016e62c37398b9e485677450b7877257ce1d679785f426"} Oct 02 18:24:28 crc kubenswrapper[4832]: I1002 18:24:28.958563 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c" exitCode=0 Oct 02 18:24:28 crc kubenswrapper[4832]: I1002 18:24:28.958659 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c"} Oct 02 18:24:30 crc kubenswrapper[4832]: I1002 18:24:30.508372 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:24:30 crc kubenswrapper[4832]: I1002 18:24:30.510848 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:24:30 crc kubenswrapper[4832]: I1002 18:24:30.558596 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:24:30 crc kubenswrapper[4832]: I1002 18:24:30.977611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"029420f8fd747c5d74aa276bd82319f4ec00978e474c4a1efa16e6ab08101758"} Oct 02 18:24:31 crc kubenswrapper[4832]: I1002 18:24:31.023383 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:24:31 crc kubenswrapper[4832]: I1002 18:24:31.968998 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:24:31 crc kubenswrapper[4832]: I1002 18:24:31.969508 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:24:32 crc kubenswrapper[4832]: I1002 18:24:32.030676 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:24:32 crc kubenswrapper[4832]: I1002 18:24:32.104522 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:24:34 crc kubenswrapper[4832]: I1002 18:24:34.672227 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-44j5w"] Oct 02 18:24:34 crc kubenswrapper[4832]: I1002 18:24:34.672980 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-44j5w" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerName="registry-server" containerID="cri-o://31311a17353045df78f71bf840ade8b93af17577c168a6c7deee499c523286b3" gracePeriod=2 Oct 02 18:24:37 crc kubenswrapper[4832]: I1002 18:24:37.021883 4832 generic.go:334] "Generic (PLEG): container finished" podID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerID="31311a17353045df78f71bf840ade8b93af17577c168a6c7deee499c523286b3" exitCode=0 Oct 02 18:24:37 crc kubenswrapper[4832]: I1002 18:24:37.021980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44j5w" 
event={"ID":"a6d7c663-a450-49ec-a95a-d38d8df2b1cc","Type":"ContainerDied","Data":"31311a17353045df78f71bf840ade8b93af17577c168a6c7deee499c523286b3"} Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.758674 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.870749 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-utilities\") pod \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.870815 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckj22\" (UniqueName: \"kubernetes.io/projected/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-kube-api-access-ckj22\") pod \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.870874 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-catalog-content\") pod \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\" (UID: \"a6d7c663-a450-49ec-a95a-d38d8df2b1cc\") " Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.871929 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-utilities" (OuterVolumeSpecName: "utilities") pod "a6d7c663-a450-49ec-a95a-d38d8df2b1cc" (UID: "a6d7c663-a450-49ec-a95a-d38d8df2b1cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.879782 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-kube-api-access-ckj22" (OuterVolumeSpecName: "kube-api-access-ckj22") pod "a6d7c663-a450-49ec-a95a-d38d8df2b1cc" (UID: "a6d7c663-a450-49ec-a95a-d38d8df2b1cc"). InnerVolumeSpecName "kube-api-access-ckj22". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.972815 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.972879 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckj22\" (UniqueName: \"kubernetes.io/projected/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-kube-api-access-ckj22\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:38 crc kubenswrapper[4832]: I1002 18:24:38.995456 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6d7c663-a450-49ec-a95a-d38d8df2b1cc" (UID: "a6d7c663-a450-49ec-a95a-d38d8df2b1cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.042611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-44j5w" event={"ID":"a6d7c663-a450-49ec-a95a-d38d8df2b1cc","Type":"ContainerDied","Data":"9aa5c4c38dbe1b8c9d8ac63d926410ed5eda4a0f1430fde32a9cf9a9d23133d9"} Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.042694 4832 scope.go:117] "RemoveContainer" containerID="31311a17353045df78f71bf840ade8b93af17577c168a6c7deee499c523286b3" Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.042697 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-44j5w" Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.069194 4832 scope.go:117] "RemoveContainer" containerID="3ddb7f02e3d370ad34f28e26cf62dee8fd4202c1b8e8a8f5fdf99d49752a12aa" Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.074891 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d7c663-a450-49ec-a95a-d38d8df2b1cc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.080686 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-44j5w"] Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.084645 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-44j5w"] Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.095418 4832 scope.go:117] "RemoveContainer" containerID="99a5db85419a75725be5b991907b702903be3ca5dc2db5036de9e54640706aa2" Oct 02 18:24:39 crc kubenswrapper[4832]: I1002 18:24:39.230624 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" path="/var/lib/kubelet/pods/a6d7c663-a450-49ec-a95a-d38d8df2b1cc/volumes" Oct 02 18:24:40 crc kubenswrapper[4832]: I1002 18:24:40.050788 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c88zl" event={"ID":"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a","Type":"ContainerStarted","Data":"42822c44f6045595f05ceebc8e9a00305120e170a7ae90a5634b8c5615fbe5e9"} Oct 02 18:24:40 crc kubenswrapper[4832]: I1002 18:24:40.054401 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86r7x" event={"ID":"8f7511c3-f168-4aa4-ab7a-09e94e1ee900","Type":"ContainerStarted","Data":"c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055"} Oct 02 18:24:40 crc kubenswrapper[4832]: I1002 18:24:40.056119 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkcs9" event={"ID":"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4","Type":"ContainerStarted","Data":"cbfb07d5a40fb60bcd616df32a49699c00b917695f62f9b07bade0157d6e2714"} Oct 02 18:24:40 crc kubenswrapper[4832]: I1002 18:24:40.058234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgq75" event={"ID":"cd8c743f-e305-47fe-9858-0c0af2a86ea3","Type":"ContainerStarted","Data":"3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2"} Oct 02 18:24:40 crc kubenswrapper[4832]: I1002 18:24:40.097640 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c88zl" podStartSLOduration=4.312934758 podStartE2EDuration="1m12.09762159s" podCreationTimestamp="2025-10-02 
18:23:28 +0000 UTC" firstStartedPulling="2025-10-02 18:23:30.327308496 +0000 UTC m=+167.296751368" lastFinishedPulling="2025-10-02 18:24:38.111995288 +0000 UTC m=+235.081438200" observedRunningTime="2025-10-02 18:24:40.07809707 +0000 UTC m=+237.047539952" watchObservedRunningTime="2025-10-02 18:24:40.09762159 +0000 UTC m=+237.067064462" Oct 02 18:24:40 crc kubenswrapper[4832]: I1002 18:24:40.099872 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-86r7x" podStartSLOduration=10.236429226 podStartE2EDuration="1m13.0998579s" podCreationTimestamp="2025-10-02 18:23:27 +0000 UTC" firstStartedPulling="2025-10-02 18:23:30.301426846 +0000 UTC m=+167.270869718" lastFinishedPulling="2025-10-02 18:24:33.16485552 +0000 UTC m=+230.134298392" observedRunningTime="2025-10-02 18:24:40.097288559 +0000 UTC m=+237.066731441" watchObservedRunningTime="2025-10-02 18:24:40.0998579 +0000 UTC m=+237.069300772" Oct 02 18:24:40 crc kubenswrapper[4832]: I1002 18:24:40.118179 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nkcs9" podStartSLOduration=9.182697607 podStartE2EDuration="1m12.118151292s" podCreationTimestamp="2025-10-02 18:23:28 +0000 UTC" firstStartedPulling="2025-10-02 18:23:30.306305979 +0000 UTC m=+167.275748851" lastFinishedPulling="2025-10-02 18:24:33.241759654 +0000 UTC m=+230.211202536" observedRunningTime="2025-10-02 18:24:40.115575251 +0000 UTC m=+237.085018123" watchObservedRunningTime="2025-10-02 18:24:40.118151292 +0000 UTC m=+237.087594164" Oct 02 18:24:41 crc kubenswrapper[4832]: I1002 18:24:41.067716 4832 generic.go:334] "Generic (PLEG): container finished" podID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerID="3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2" exitCode=0 Oct 02 18:24:41 crc kubenswrapper[4832]: I1002 18:24:41.067945 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgq75" event={"ID":"cd8c743f-e305-47fe-9858-0c0af2a86ea3","Type":"ContainerDied","Data":"3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2"} Oct 02 18:24:43 crc kubenswrapper[4832]: I1002 18:24:43.096686 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgq75" event={"ID":"cd8c743f-e305-47fe-9858-0c0af2a86ea3","Type":"ContainerStarted","Data":"b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3"} Oct 02 18:24:43 crc kubenswrapper[4832]: I1002 18:24:43.120614 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jgq75" podStartSLOduration=3.398229538 podStartE2EDuration="1m15.120578075s" podCreationTimestamp="2025-10-02 18:23:28 +0000 UTC" firstStartedPulling="2025-10-02 18:23:30.309501869 +0000 UTC m=+167.278944741" lastFinishedPulling="2025-10-02 18:24:42.031850406 +0000 UTC m=+239.001293278" observedRunningTime="2025-10-02 18:24:43.117734336 +0000 UTC m=+240.087177208" watchObservedRunningTime="2025-10-02 18:24:43.120578075 +0000 UTC m=+240.090020947" Oct 02 18:24:46 crc kubenswrapper[4832]: I1002 18:24:46.543648 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgcrc"] Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.322533 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 
18:24:48.322587 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.390092 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.495938 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.495997 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.548273 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.710556 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.710607 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.752397 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.904618 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.904683 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:24:48 crc kubenswrapper[4832]: I1002 18:24:48.951341 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:24:49 crc kubenswrapper[4832]: I1002 18:24:49.174379 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:24:49 crc kubenswrapper[4832]: I1002 18:24:49.176695 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:24:49 crc kubenswrapper[4832]: I1002 18:24:49.177238 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:24:49 crc kubenswrapper[4832]: I1002 18:24:49.189143 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:24:51 crc kubenswrapper[4832]: I1002 18:24:51.194505 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c88zl"] Oct 02 18:24:51 crc kubenswrapper[4832]: I1002 18:24:51.195124 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c88zl" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerName="registry-server" containerID="cri-o://42822c44f6045595f05ceebc8e9a00305120e170a7ae90a5634b8c5615fbe5e9" gracePeriod=2 Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.153716 4832 generic.go:334] "Generic (PLEG): container finished" podID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" 
containerID="42822c44f6045595f05ceebc8e9a00305120e170a7ae90a5634b8c5615fbe5e9" exitCode=0 Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.153947 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c88zl" event={"ID":"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a","Type":"ContainerDied","Data":"42822c44f6045595f05ceebc8e9a00305120e170a7ae90a5634b8c5615fbe5e9"} Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.282497 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.391365 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh6c2\" (UniqueName: \"kubernetes.io/projected/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-kube-api-access-fh6c2\") pod \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.391447 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-utilities\") pod \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.391508 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-catalog-content\") pod \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\" (UID: \"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a\") " Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.392433 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-utilities" (OuterVolumeSpecName: "utilities") pod "7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" (UID: "7ff7bbb4-a78b-4f19-a838-735f3afd9e4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.398346 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-kube-api-access-fh6c2" (OuterVolumeSpecName: "kube-api-access-fh6c2") pod "7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" (UID: "7ff7bbb4-a78b-4f19-a838-735f3afd9e4a"). InnerVolumeSpecName "kube-api-access-fh6c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.443870 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" (UID: "7ff7bbb4-a78b-4f19-a838-735f3afd9e4a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.492878 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.493108 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.493195 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh6c2\" (UniqueName: \"kubernetes.io/projected/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a-kube-api-access-fh6c2\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.594617 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nkcs9"] Oct 02 18:24:53 crc kubenswrapper[4832]: I1002 18:24:53.595179 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nkcs9" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerName="registry-server" containerID="cri-o://cbfb07d5a40fb60bcd616df32a49699c00b917695f62f9b07bade0157d6e2714" gracePeriod=2 Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.163966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c88zl" event={"ID":"7ff7bbb4-a78b-4f19-a838-735f3afd9e4a","Type":"ContainerDied","Data":"a06e17f211169cece3857613494bbfbc42b778b1f2f790f59198e7ac4cdcfea3"} Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.164035 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c88zl" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.165396 4832 scope.go:117] "RemoveContainer" containerID="42822c44f6045595f05ceebc8e9a00305120e170a7ae90a5634b8c5615fbe5e9" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.170339 4832 generic.go:334] "Generic (PLEG): container finished" podID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerID="cbfb07d5a40fb60bcd616df32a49699c00b917695f62f9b07bade0157d6e2714" exitCode=0 Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.170406 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkcs9" event={"ID":"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4","Type":"ContainerDied","Data":"cbfb07d5a40fb60bcd616df32a49699c00b917695f62f9b07bade0157d6e2714"} Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.191591 4832 scope.go:117] "RemoveContainer" containerID="6a6f7a9dc0b5caf46d0d7614ffb0d41e3c68a3c6f24e7d321589257835e9b8a4" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.210233 4832 scope.go:117] "RemoveContainer" containerID="5f44b4befa1d4ca1577e9167bbaf04ab084fd8433f621144f670397b0c9fe205" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.219324 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c88zl"] Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.221572 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c88zl"] Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.514678 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.613011 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-catalog-content\") pod \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.613074 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fblvk\" (UniqueName: \"kubernetes.io/projected/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-kube-api-access-fblvk\") pod \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.613143 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-utilities\") pod \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\" (UID: \"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4\") " Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.614244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-utilities" (OuterVolumeSpecName: "utilities") pod "b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" (UID: "b31d21a6-5799-4aaa-ae96-b82dadcdc6c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.620231 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-kube-api-access-fblvk" (OuterVolumeSpecName: "kube-api-access-fblvk") pod "b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" (UID: "b31d21a6-5799-4aaa-ae96-b82dadcdc6c4"). InnerVolumeSpecName "kube-api-access-fblvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.658640 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" (UID: "b31d21a6-5799-4aaa-ae96-b82dadcdc6c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.714403 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.714457 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fblvk\" (UniqueName: \"kubernetes.io/projected/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-kube-api-access-fblvk\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:54 crc kubenswrapper[4832]: I1002 18:24:54.714476 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:24:55 crc kubenswrapper[4832]: I1002 18:24:55.185068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nkcs9" event={"ID":"b31d21a6-5799-4aaa-ae96-b82dadcdc6c4","Type":"ContainerDied","Data":"2d99d5d2b4f05f52efcec611536229db28492461e87fecc71c316721e611ba51"} Oct 02 18:24:55 crc kubenswrapper[4832]: I1002 18:24:55.185149 4832 scope.go:117] "RemoveContainer" containerID="cbfb07d5a40fb60bcd616df32a49699c00b917695f62f9b07bade0157d6e2714" Oct 02 18:24:55 crc kubenswrapper[4832]: I1002 18:24:55.185145 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nkcs9" Oct 02 18:24:55 crc kubenswrapper[4832]: I1002 18:24:55.220205 4832 scope.go:117] "RemoveContainer" containerID="b568dd816eb6fc4ea2016e62c37398b9e485677450b7877257ce1d679785f426" Oct 02 18:24:55 crc kubenswrapper[4832]: I1002 18:24:55.230219 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" path="/var/lib/kubelet/pods/7ff7bbb4-a78b-4f19-a838-735f3afd9e4a/volumes" Oct 02 18:24:55 crc kubenswrapper[4832]: I1002 18:24:55.231113 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nkcs9"] Oct 02 18:24:55 crc kubenswrapper[4832]: I1002 18:24:55.231143 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nkcs9"] Oct 02 18:24:55 crc kubenswrapper[4832]: I1002 18:24:55.235807 4832 scope.go:117] "RemoveContainer" containerID="392b4ceeebe30b0b5b9310aa02bb09f5f6c9ce2dd679391e0a1778d9f3164c11" Oct 02 18:24:57 crc kubenswrapper[4832]: I1002 18:24:57.230823 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" path="/var/lib/kubelet/pods/b31d21a6-5799-4aaa-ae96-b82dadcdc6c4/volumes" Oct 02 18:25:12 crc kubenswrapper[4832]: I1002 18:25:12.139034 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" podUID="f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" containerName="oauth-openshift" containerID="cri-o://a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c" gracePeriod=15 Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.085873 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123031 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7c89776f78-tbsq4"] Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123369 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerName="extract-utilities" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123389 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerName="extract-utilities" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123408 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerName="extract-utilities" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123419 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerName="extract-utilities" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123436 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123445 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123456 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" containerName="oauth-openshift" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123466 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" containerName="oauth-openshift" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123476 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerName="extract-content" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123487 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerName="extract-content" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123504 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerName="extract-content" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123515 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerName="extract-content" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123527 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123538 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123548 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerName="extract-content" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123556 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerName="extract-content" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123568 4832 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123577 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123594 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerName="extract-utilities" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123606 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerName="extract-utilities" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123620 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07af2170-0444-4cb4-9ec0-3fd7b7265d99" containerName="pruner" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123630 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="07af2170-0444-4cb4-9ec0-3fd7b7265d99" containerName="pruner" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123649 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123659 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123672 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerName="extract-utilities" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123683 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerName="extract-utilities" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123694 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd" containerName="pruner" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123705 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd" containerName="pruner" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.123723 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerName="extract-content" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123734 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerName="extract-content" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123907 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="07af2170-0444-4cb4-9ec0-3fd7b7265d99" containerName="pruner" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123923 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="57ac4c17-ddb7-4bd6-b0b4-af131eb2f9cd" containerName="pruner" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123937 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b31d21a6-5799-4aaa-ae96-b82dadcdc6c4" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123952 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d7c663-a450-49ec-a95a-d38d8df2b1cc" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123967 4832 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="359ecf9f-50f0-4941-b8d6-4b3330187bf4" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123979 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff7bbb4-a78b-4f19-a838-735f3afd9e4a" containerName="registry-server" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.123994 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" containerName="oauth-openshift" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.125225 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.144338 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c89776f78-tbsq4"] Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.189798 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-policies\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.189851 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-service-ca\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.189901 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-trusted-ca-bundle\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.189944 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-dir\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.189967 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-session\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.190054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-error\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.190035 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.190096 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-ocp-branding-template\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.190292 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-login\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.190770 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.190801 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191124 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-cliconfig\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191159 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-provider-selection\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191186 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-router-certs\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191342 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-idp-0-file-data\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191367 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-serving-cert\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191391 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s7lg\" (UniqueName: \"kubernetes.io/projected/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-kube-api-access-5s7lg\") pod \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\" (UID: \"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1\") " Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191454 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191572 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191608 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191657 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-error\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191682 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-session\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2fbe6430-5ce1-442e-9a47-5610d801d5c8-audit-dir\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-audit-policies\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191909 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191934 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-login\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.191986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95fd\" (UniqueName: \"kubernetes.io/projected/2fbe6430-5ce1-442e-9a47-5610d801d5c8-kube-api-access-m95fd\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.192030 4832 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.192047 4832 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 
02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.192042 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.192061 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.192166 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.197469 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.198208 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.198229 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-kube-api-access-5s7lg" (OuterVolumeSpecName: "kube-api-access-5s7lg") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "kube-api-access-5s7lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.198448 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.198691 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.198967 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.200722 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.200848 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.202777 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" (UID: "f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.293548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-audit-policies\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294150 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294188 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-login\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m95fd\" (UniqueName: \"kubernetes.io/projected/2fbe6430-5ce1-442e-9a47-5610d801d5c8-kube-api-access-m95fd\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294372 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " 
pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294409 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294436 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-error\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294500 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-session\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294536 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2fbe6430-5ce1-442e-9a47-5610d801d5c8-audit-dir\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294620 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294691 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294712 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294730 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-ocp-branding-template\") 
on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294753 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294773 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294797 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294822 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294840 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294858 4832 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294876 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s7lg\" (UniqueName: \"kubernetes.io/projected/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1-kube-api-access-5s7lg\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.294875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-audit-policies\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.295436 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2fbe6430-5ce1-442e-9a47-5610d801d5c8-audit-dir\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.296073 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-service-ca\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.296943 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.297767 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.299004 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.299214 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.300370 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.300444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-login\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.300702 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.302091 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-user-template-error\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.303856 4832 generic.go:334] "Generic (PLEG): container finished" podID="f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" 
containerID="a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c" exitCode=0 Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.303895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" event={"ID":"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1","Type":"ContainerDied","Data":"a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c"} Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.303924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" event={"ID":"f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1","Type":"ContainerDied","Data":"7bd19dd68c94df6919c2991690a782a3a4702695b813ebe0cba9804db29e33a3"} Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.303941 4832 scope.go:117] "RemoveContainer" containerID="a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.304060 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fgcrc" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.305464 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-router-certs\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.305633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2fbe6430-5ce1-442e-9a47-5610d801d5c8-v4-0-config-system-session\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.323216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95fd\" (UniqueName: \"kubernetes.io/projected/2fbe6430-5ce1-442e-9a47-5610d801d5c8-kube-api-access-m95fd\") pod \"oauth-openshift-7c89776f78-tbsq4\" (UID: \"2fbe6430-5ce1-442e-9a47-5610d801d5c8\") " pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.360091 4832 scope.go:117] "RemoveContainer" containerID="a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c" Oct 02 18:25:13 crc kubenswrapper[4832]: E1002 18:25:13.360583 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c\": container with ID starting with a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c not found: ID does not exist" containerID="a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.360626 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c"} err="failed to get container status \"a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c\": rpc error: code = NotFound desc = could not find container \"a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c\": container with ID starting 
with a300fbdf90576014c4b4317376268e273fb0236e0b2ff19b3b65024f5232b85c not found: ID does not exist" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.363590 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgcrc"] Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.368750 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fgcrc"] Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.462445 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:13 crc kubenswrapper[4832]: I1002 18:25:13.923522 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7c89776f78-tbsq4"] Oct 02 18:25:14 crc kubenswrapper[4832]: I1002 18:25:14.315732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" event={"ID":"2fbe6430-5ce1-442e-9a47-5610d801d5c8","Type":"ContainerStarted","Data":"266c4074822c8e87bb2c61bee88e73330e0b1eb2c8a354a7aafa91463673bc75"} Oct 02 18:25:14 crc kubenswrapper[4832]: I1002 18:25:14.316325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" event={"ID":"2fbe6430-5ce1-442e-9a47-5610d801d5c8","Type":"ContainerStarted","Data":"372837764e46711851e458ae9c9f81042e6f2532abbb25cc7d6d119be0a2c6c2"} Oct 02 18:25:14 crc kubenswrapper[4832]: I1002 18:25:14.316455 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:14 crc kubenswrapper[4832]: I1002 18:25:14.531257 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" Oct 02 18:25:14 crc kubenswrapper[4832]: I1002 18:25:14.548256 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7c89776f78-tbsq4" podStartSLOduration=28.548224831 podStartE2EDuration="28.548224831s" podCreationTimestamp="2025-10-02 18:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:25:14.345143424 +0000 UTC m=+271.314586296" watchObservedRunningTime="2025-10-02 18:25:14.548224831 +0000 UTC m=+271.517667723" Oct 02 18:25:15 crc kubenswrapper[4832]: I1002 18:25:15.236068 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1" path="/var/lib/kubelet/pods/f65c7c0d-1fa7-4512-bdfb-fa6e072bb4f1/volumes" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.397765 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgq75"] Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.398862 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jgq75" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerName="registry-server" containerID="cri-o://b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3" gracePeriod=30 Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.402376 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-86r7x"] Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.402613 4832 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-86r7x" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerName="registry-server" containerID="cri-o://c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055" gracePeriod=30 Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.408381 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m4vv7"] Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.408644 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" podUID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" containerName="marketplace-operator" containerID="cri-o://0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5" gracePeriod=30 Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.420809 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p42pd"] Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.421077 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p42pd" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" containerName="registry-server" containerID="cri-o://5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877" gracePeriod=30 Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.433166 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vpnm2"] Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.434228 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.435431 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fcwtc"] Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.435712 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fcwtc" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerName="registry-server" containerID="cri-o://0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5" gracePeriod=30 Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.452080 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vpnm2"] Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.472834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4deba2ec-10ea-48dd-b732-a924f01ab1b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.472919 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpx4z\" (UniqueName: \"kubernetes.io/projected/4deba2ec-10ea-48dd-b732-a924f01ab1b7-kube-api-access-qpx4z\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.472945 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4deba2ec-10ea-48dd-b732-a924f01ab1b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.574008 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpx4z\" (UniqueName: \"kubernetes.io/projected/4deba2ec-10ea-48dd-b732-a924f01ab1b7-kube-api-access-qpx4z\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.574067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4deba2ec-10ea-48dd-b732-a924f01ab1b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.574125 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4deba2ec-10ea-48dd-b732-a924f01ab1b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.576591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4deba2ec-10ea-48dd-b732-a924f01ab1b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.579840 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4deba2ec-10ea-48dd-b732-a924f01ab1b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.596198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpx4z\" (UniqueName: \"kubernetes.io/projected/4deba2ec-10ea-48dd-b732-a924f01ab1b7-kube-api-access-qpx4z\") pod \"marketplace-operator-79b997595-vpnm2\" (UID: \"4deba2ec-10ea-48dd-b732-a924f01ab1b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.770891 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.832210 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.834663 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.859682 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.860117 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.860703 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.877870 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-catalog-content\") pod \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.877950 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-operator-metrics\") pod \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.878001 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-trusted-ca\") pod \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.878033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4brh4\" (UniqueName: \"kubernetes.io/projected/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-kube-api-access-4brh4\") pod \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.878054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4jns\" (UniqueName: \"kubernetes.io/projected/e1ccc88e-b013-4c52-92b1-6e6462492c3c-kube-api-access-q4jns\") pod \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\" (UID: \"e1ccc88e-b013-4c52-92b1-6e6462492c3c\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.878075 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-utilities\") pod \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\" (UID: \"8f7511c3-f168-4aa4-ab7a-09e94e1ee900\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.880814 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-utilities" (OuterVolumeSpecName: "utilities") pod "8f7511c3-f168-4aa4-ab7a-09e94e1ee900" (UID: "8f7511c3-f168-4aa4-ab7a-09e94e1ee900"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.882328 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e1ccc88e-b013-4c52-92b1-6e6462492c3c" (UID: "e1ccc88e-b013-4c52-92b1-6e6462492c3c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.886116 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ccc88e-b013-4c52-92b1-6e6462492c3c-kube-api-access-q4jns" (OuterVolumeSpecName: "kube-api-access-q4jns") pod "e1ccc88e-b013-4c52-92b1-6e6462492c3c" (UID: "e1ccc88e-b013-4c52-92b1-6e6462492c3c"). InnerVolumeSpecName "kube-api-access-q4jns". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.890197 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-kube-api-access-4brh4" (OuterVolumeSpecName: "kube-api-access-4brh4") pod "8f7511c3-f168-4aa4-ab7a-09e94e1ee900" (UID: "8f7511c3-f168-4aa4-ab7a-09e94e1ee900"). InnerVolumeSpecName "kube-api-access-4brh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.890575 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e1ccc88e-b013-4c52-92b1-6e6462492c3c" (UID: "e1ccc88e-b013-4c52-92b1-6e6462492c3c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.951428 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f7511c3-f168-4aa4-ab7a-09e94e1ee900" (UID: "8f7511c3-f168-4aa4-ab7a-09e94e1ee900"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.979924 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-utilities\") pod \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.979995 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-utilities\") pod \"d532f249-806b-4c9d-936e-7504d83f11ae\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980053 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz4vb\" (UniqueName: \"kubernetes.io/projected/d532f249-806b-4c9d-936e-7504d83f11ae-kube-api-access-tz4vb\") pod \"d532f249-806b-4c9d-936e-7504d83f11ae\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980094 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-catalog-content\") pod \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-utilities\") pod \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980187 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-catalog-content\") pod \"d532f249-806b-4c9d-936e-7504d83f11ae\" (UID: \"d532f249-806b-4c9d-936e-7504d83f11ae\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980215 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbp67\" (UniqueName: \"kubernetes.io/projected/cd8c743f-e305-47fe-9858-0c0af2a86ea3-kube-api-access-gbp67\") pod \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980249 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcdqc\" (UniqueName: \"kubernetes.io/projected/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-kube-api-access-bcdqc\") pod \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\" (UID: \"45cecbc0-ddb6-4cc1-b2d8-892f598086a5\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980360 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-catalog-content\") pod \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\" (UID: \"cd8c743f-e305-47fe-9858-0c0af2a86ea3\") " Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980594 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 
18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980622 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4brh4\" (UniqueName: \"kubernetes.io/projected/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-kube-api-access-4brh4\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980633 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4jns\" (UniqueName: \"kubernetes.io/projected/e1ccc88e-b013-4c52-92b1-6e6462492c3c-kube-api-access-q4jns\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980647 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980660 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f7511c3-f168-4aa4-ab7a-09e94e1ee900-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980671 4832 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e1ccc88e-b013-4c52-92b1-6e6462492c3c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.980733 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-utilities" (OuterVolumeSpecName: "utilities") pod "d532f249-806b-4c9d-936e-7504d83f11ae" (UID: "d532f249-806b-4c9d-936e-7504d83f11ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.981188 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-utilities" (OuterVolumeSpecName: "utilities") pod "cd8c743f-e305-47fe-9858-0c0af2a86ea3" (UID: "cd8c743f-e305-47fe-9858-0c0af2a86ea3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.981462 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-utilities" (OuterVolumeSpecName: "utilities") pod "45cecbc0-ddb6-4cc1-b2d8-892f598086a5" (UID: "45cecbc0-ddb6-4cc1-b2d8-892f598086a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.986403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d532f249-806b-4c9d-936e-7504d83f11ae-kube-api-access-tz4vb" (OuterVolumeSpecName: "kube-api-access-tz4vb") pod "d532f249-806b-4c9d-936e-7504d83f11ae" (UID: "d532f249-806b-4c9d-936e-7504d83f11ae"). InnerVolumeSpecName "kube-api-access-tz4vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.989593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8c743f-e305-47fe-9858-0c0af2a86ea3-kube-api-access-gbp67" (OuterVolumeSpecName: "kube-api-access-gbp67") pod "cd8c743f-e305-47fe-9858-0c0af2a86ea3" (UID: "cd8c743f-e305-47fe-9858-0c0af2a86ea3"). InnerVolumeSpecName "kube-api-access-gbp67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:25:26 crc kubenswrapper[4832]: I1002 18:25:26.993998 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-kube-api-access-bcdqc" (OuterVolumeSpecName: "kube-api-access-bcdqc") pod "45cecbc0-ddb6-4cc1-b2d8-892f598086a5" (UID: "45cecbc0-ddb6-4cc1-b2d8-892f598086a5"). InnerVolumeSpecName "kube-api-access-bcdqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.004594 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d532f249-806b-4c9d-936e-7504d83f11ae" (UID: "d532f249-806b-4c9d-936e-7504d83f11ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.037925 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd8c743f-e305-47fe-9858-0c0af2a86ea3" (UID: "cd8c743f-e305-47fe-9858-0c0af2a86ea3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.082162 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.082200 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz4vb\" (UniqueName: \"kubernetes.io/projected/d532f249-806b-4c9d-936e-7504d83f11ae-kube-api-access-tz4vb\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.082212 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.082224 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d532f249-806b-4c9d-936e-7504d83f11ae-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.082236 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbp67\" (UniqueName: \"kubernetes.io/projected/cd8c743f-e305-47fe-9858-0c0af2a86ea3-kube-api-access-gbp67\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.082249 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcdqc\" (UniqueName: \"kubernetes.io/projected/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-kube-api-access-bcdqc\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.082313 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.082326 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8c743f-e305-47fe-9858-0c0af2a86ea3-utilities\") on node \"crc\" 
DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.084865 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45cecbc0-ddb6-4cc1-b2d8-892f598086a5" (UID: "45cecbc0-ddb6-4cc1-b2d8-892f598086a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.183226 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45cecbc0-ddb6-4cc1-b2d8-892f598086a5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.193527 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vpnm2"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.390367 4832 generic.go:334] "Generic (PLEG): container finished" podID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerID="b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3" exitCode=0 Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.390441 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgq75" event={"ID":"cd8c743f-e305-47fe-9858-0c0af2a86ea3","Type":"ContainerDied","Data":"b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.390445 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgq75" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.390480 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgq75" event={"ID":"cd8c743f-e305-47fe-9858-0c0af2a86ea3","Type":"ContainerDied","Data":"c370165ce552b3130e05e3abe54b112d3f4ae4d3a0fe9920808c6abdef26cf86"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.390502 4832 scope.go:117] "RemoveContainer" containerID="b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.392563 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" event={"ID":"4deba2ec-10ea-48dd-b732-a924f01ab1b7","Type":"ContainerStarted","Data":"20110ea384aedaa5b378917db78ab57dda3cb6748c367e73bdb28d6ff3d2cfaf"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.392616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" event={"ID":"4deba2ec-10ea-48dd-b732-a924f01ab1b7","Type":"ContainerStarted","Data":"84b82be067903a800f4ee2c4aa05225a57bc4481a1411a03f980b5381017d1d8"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.392774 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.394039 4832 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vpnm2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.394089 4832 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" podUID="4deba2ec-10ea-48dd-b732-a924f01ab1b7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.404788 4832 generic.go:334] "Generic (PLEG): container finished" podID="d532f249-806b-4c9d-936e-7504d83f11ae" containerID="5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877" exitCode=0 Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.404855 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p42pd" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.404846 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p42pd" event={"ID":"d532f249-806b-4c9d-936e-7504d83f11ae","Type":"ContainerDied","Data":"5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.407434 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p42pd" event={"ID":"d532f249-806b-4c9d-936e-7504d83f11ae","Type":"ContainerDied","Data":"69c22821cac87eb61390fd7a54620ac6ce2266ef7d87cd6a5eb1f0f3930dd4af"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.405241 4832 scope.go:117] "RemoveContainer" containerID="3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.413101 4832 generic.go:334] "Generic (PLEG): container finished" podID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerID="0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5" exitCode=0 Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.413141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcwtc" event={"ID":"45cecbc0-ddb6-4cc1-b2d8-892f598086a5","Type":"ContainerDied","Data":"0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.413171 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcwtc" event={"ID":"45cecbc0-ddb6-4cc1-b2d8-892f598086a5","Type":"ContainerDied","Data":"c829ac7fd0cb07ecd3bb8d5b5d70aca84e7c8081d5f23b57d8f386c3c3f1f1db"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.413183 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fcwtc" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.415429 4832 generic.go:334] "Generic (PLEG): container finished" podID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerID="c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055" exitCode=0 Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.415468 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86r7x" event={"ID":"8f7511c3-f168-4aa4-ab7a-09e94e1ee900","Type":"ContainerDied","Data":"c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.415483 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86r7x" event={"ID":"8f7511c3-f168-4aa4-ab7a-09e94e1ee900","Type":"ContainerDied","Data":"c9ad24a8f973dc5ba25c0d9433814072324c4f89342bf65921d2495df49620a1"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.415548 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86r7x" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.418930 4832 generic.go:334] "Generic (PLEG): container finished" podID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" containerID="0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5" exitCode=0 Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.419017 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" event={"ID":"e1ccc88e-b013-4c52-92b1-6e6462492c3c","Type":"ContainerDied","Data":"0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.419049 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" event={"ID":"e1ccc88e-b013-4c52-92b1-6e6462492c3c","Type":"ContainerDied","Data":"3058a08db5f63e1af7a80ae3ee9a111416244f1160797f5eb1181f182e568cce"} Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.419101 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m4vv7" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.423854 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" podStartSLOduration=1.423835199 podStartE2EDuration="1.423835199s" podCreationTimestamp="2025-10-02 18:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:25:27.421168566 +0000 UTC m=+284.390611438" watchObservedRunningTime="2025-10-02 18:25:27.423835199 +0000 UTC m=+284.393278071" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.442342 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgq75"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.443101 4832 scope.go:117] "RemoveContainer" containerID="7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.448884 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jgq75"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.464535 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-86r7x"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.467776 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-86r7x"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.469761 4832 scope.go:117] "RemoveContainer" containerID="b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.470183 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3\": container with ID starting with b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3 not found: ID does not exist" containerID="b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.470211 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3"} err="failed to get container status \"b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3\": rpc error: code = NotFound desc = could not find container \"b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3\": container with ID starting with b881b14b77cf91e2d766f1af28d537a639ff16869316978ac1502ec08f81b5d3 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.470230 4832 scope.go:117] "RemoveContainer" containerID="3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.470459 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2\": container with ID starting with 3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2 not found: ID does not exist" containerID="3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.470480 4832 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2"} err="failed to get container status \"3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2\": rpc error: code = NotFound desc = could not find container \"3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2\": container with ID starting with 3ee6c94b30d56b4d2882a13de3e49f89be1760df39fcada752c297f133a8b4b2 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.470493 4832 scope.go:117] "RemoveContainer" containerID="7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.470841 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2\": container with ID starting with 7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2 not found: ID does not exist" containerID="7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.470942 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2"} err="failed to get container status \"7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2\": rpc error: code = NotFound desc = could not find container \"7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2\": container with ID starting with 7ef0ce2e090ab951e6582997f2316e66209d4e3c906686088833a9bc9ca03cb2 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.471063 4832 scope.go:117] "RemoveContainer" containerID="5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.471252 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fcwtc"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.475719 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fcwtc"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.479345 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p42pd"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.482905 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p42pd"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.486977 4832 scope.go:117] "RemoveContainer" containerID="e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.487767 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m4vv7"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.490754 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m4vv7"] Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.500956 4832 scope.go:117] "RemoveContainer" containerID="762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.513088 4832 scope.go:117] "RemoveContainer" containerID="5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.513442 4832 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877\": container with ID starting with 5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877 not found: ID does not exist" containerID="5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.513464 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877"} err="failed to get container status \"5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877\": rpc error: code = NotFound desc = could not find container \"5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877\": container with ID starting with 5433a8cd5162c8209217df85882c16f8c8d9fe821c2d84866a65d1bb61b16877 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.513486 4832 scope.go:117] "RemoveContainer" containerID="e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.513664 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3\": container with ID starting with e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3 not found: ID does not exist" containerID="e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.513681 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3"} err="failed to get container status \"e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3\": rpc error: code = NotFound desc = could not find container \"e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3\": container with ID starting with e72373c090720e9e51fe0d0f253b7ed91916751ed2577478bb5a6f493fcb3ca3 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.513695 4832 scope.go:117] "RemoveContainer" containerID="762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.513841 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88\": container with ID starting with 762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88 not found: ID does not exist" containerID="762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.513856 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88"} err="failed to get container status \"762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88\": rpc error: code = NotFound desc = could not find container \"762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88\": container with ID starting with 762a47849370e950bdd93a68887548c8e7ad619e307f73f5146ba0f372859a88 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.513869 4832 scope.go:117] "RemoveContainer" 
containerID="0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.526306 4832 scope.go:117] "RemoveContainer" containerID="62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.585178 4832 scope.go:117] "RemoveContainer" containerID="cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.602836 4832 scope.go:117] "RemoveContainer" containerID="0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.606185 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5\": container with ID starting with 0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5 not found: ID does not exist" containerID="0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.606383 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5"} err="failed to get container status \"0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5\": rpc error: code = NotFound desc = could not find container \"0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5\": container with ID starting with 0f8f2e1d606e1fb8db362c242ca6e03a300054c9daf88de9f0b591762814aeb5 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.606605 4832 scope.go:117] "RemoveContainer" containerID="62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.608391 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2\": container with ID starting with 62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2 not found: ID does not exist" containerID="62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.609205 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2"} err="failed to get container status \"62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2\": rpc error: code = NotFound desc = could not find container \"62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2\": container with ID starting with 62f2e62e9045bb8936e3f2f4e18a117831f3a3a32ba65beaeccde7e30b6861e2 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.609233 4832 scope.go:117] "RemoveContainer" containerID="cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.613892 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55\": container with ID starting with cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55 not found: ID does not exist" containerID="cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55" 
Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.614031 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55"} err="failed to get container status \"cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55\": rpc error: code = NotFound desc = could not find container \"cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55\": container with ID starting with cb4e6be060b45aa695ba9701ca8b5b58a0f169cf20a32f8e3828749d70a54b55 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.614127 4832 scope.go:117] "RemoveContainer" containerID="c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.628617 4832 scope.go:117] "RemoveContainer" containerID="c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.647823 4832 scope.go:117] "RemoveContainer" containerID="8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.663593 4832 scope.go:117] "RemoveContainer" containerID="c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.664164 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055\": container with ID starting with c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055 not found: ID does not exist" containerID="c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.664298 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055"} err="failed to get container status \"c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055\": rpc error: code = NotFound desc = could not find container \"c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055\": container with ID starting with c8ab3a5c1b420a0fea860fed8a4930f92d3b4f07e3125fc6f8f63ba48afd3055 not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.664412 4832 scope.go:117] "RemoveContainer" containerID="c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.664795 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe\": container with ID starting with c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe not found: ID does not exist" containerID="c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.664820 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe"} err="failed to get container status \"c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe\": rpc error: code = NotFound desc = could not find container \"c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe\": container with ID starting with 
c57863473f118db70d8e5439c5f0c55fc1989d85b91d0f076966aa5f4af4c2fe not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.664835 4832 scope.go:117] "RemoveContainer" containerID="8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.665103 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce\": container with ID starting with 8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce not found: ID does not exist" containerID="8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.665198 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce"} err="failed to get container status \"8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce\": rpc error: code = NotFound desc = could not find container \"8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce\": container with ID starting with 8d3bae71e21fe0d9d133e81461c8246f6fa19325d46656b7090cb6f8f95306ce not found: ID does not exist" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.665318 4832 scope.go:117] "RemoveContainer" containerID="0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.681146 4832 scope.go:117] "RemoveContainer" containerID="0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5" Oct 02 18:25:27 crc kubenswrapper[4832]: E1002 18:25:27.683753 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5\": container with ID starting with 0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5 not found: ID does not exist" containerID="0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5" Oct 02 18:25:27 crc kubenswrapper[4832]: I1002 18:25:27.683794 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5"} err="failed to get container status \"0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5\": rpc error: code = NotFound desc = could not find container \"0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5\": container with ID starting with 0568b56757ae8e63f0bed694be35690c26741d0c04f5494ee735a4b4018100a5 not found: ID does not exist" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.448483 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vpnm2" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602273 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5jp2v"] Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602458 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602469 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" containerName="registry-server" Oct 02 18:25:28 crc 
kubenswrapper[4832]: E1002 18:25:28.602482 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerName="extract-content" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602488 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerName="extract-content" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602497 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerName="extract-utilities" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602503 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerName="extract-utilities" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602512 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerName="extract-utilities" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602519 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerName="extract-utilities" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602527 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerName="extract-content" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602533 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerName="extract-content" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602541 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" containerName="extract-utilities" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602547 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" containerName="extract-utilities" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602557 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602564 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602574 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerName="extract-content" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602581 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerName="extract-content" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602588 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602593 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602600 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" containerName="extract-content" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602605 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" 
containerName="extract-content" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602613 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerName="extract-utilities" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602619 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerName="extract-utilities" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602625 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602631 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: E1002 18:25:28.602638 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" containerName="marketplace-operator" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602644 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" containerName="marketplace-operator" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602723 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602736 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602761 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602784 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" containerName="marketplace-operator" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.602812 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" containerName="registry-server" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.604017 4832 util.go:30] "No sandbox for pod can be found. 
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.604017 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jp2v"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.605676 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.615131 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jp2v"]
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.700114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d465b37-f6e4-48a2-bda2-efc7d3601131-utilities\") pod \"redhat-marketplace-5jp2v\" (UID: \"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.700183 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d465b37-f6e4-48a2-bda2-efc7d3601131-catalog-content\") pod \"redhat-marketplace-5jp2v\" (UID: \"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.700245 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s7hg\" (UniqueName: \"kubernetes.io/projected/0d465b37-f6e4-48a2-bda2-efc7d3601131-kube-api-access-9s7hg\") pod \"redhat-marketplace-5jp2v\" (UID: \"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.801937 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d465b37-f6e4-48a2-bda2-efc7d3601131-utilities\") pod \"redhat-marketplace-5jp2v\" (UID: \"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.802063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d465b37-f6e4-48a2-bda2-efc7d3601131-catalog-content\") pod \"redhat-marketplace-5jp2v\" (UID: \"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.802195 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s7hg\" (UniqueName: \"kubernetes.io/projected/0d465b37-f6e4-48a2-bda2-efc7d3601131-kube-api-access-9s7hg\") pod \"redhat-marketplace-5jp2v\" (UID: \"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.802544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d465b37-f6e4-48a2-bda2-efc7d3601131-utilities\") pod \"redhat-marketplace-5jp2v\" (UID: \"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v"
Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.802766 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d465b37-f6e4-48a2-bda2-efc7d3601131-catalog-content\") pod \"redhat-marketplace-5jp2v\" (UID: 
\"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.816180 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p6rnh"] Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.821436 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.824674 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.825669 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s7hg\" (UniqueName: \"kubernetes.io/projected/0d465b37-f6e4-48a2-bda2-efc7d3601131-kube-api-access-9s7hg\") pod \"redhat-marketplace-5jp2v\" (UID: \"0d465b37-f6e4-48a2-bda2-efc7d3601131\") " pod="openshift-marketplace/redhat-marketplace-5jp2v" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.827201 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6rnh"] Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.903318 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bbe1430-3664-4a03-97a8-5302998288ca-catalog-content\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.903446 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tf6\" (UniqueName: \"kubernetes.io/projected/4bbe1430-3664-4a03-97a8-5302998288ca-kube-api-access-r4tf6\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.903477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bbe1430-3664-4a03-97a8-5302998288ca-utilities\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:28 crc kubenswrapper[4832]: I1002 18:25:28.918071 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5jp2v" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.004329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tf6\" (UniqueName: \"kubernetes.io/projected/4bbe1430-3664-4a03-97a8-5302998288ca-kube-api-access-r4tf6\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.004373 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bbe1430-3664-4a03-97a8-5302998288ca-utilities\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.004399 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bbe1430-3664-4a03-97a8-5302998288ca-catalog-content\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.004972 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bbe1430-3664-4a03-97a8-5302998288ca-catalog-content\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.005315 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bbe1430-3664-4a03-97a8-5302998288ca-utilities\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.023063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4tf6\" (UniqueName: \"kubernetes.io/projected/4bbe1430-3664-4a03-97a8-5302998288ca-kube-api-access-r4tf6\") pod \"redhat-operators-p6rnh\" (UID: \"4bbe1430-3664-4a03-97a8-5302998288ca\") " pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.159498 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.229135 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45cecbc0-ddb6-4cc1-b2d8-892f598086a5" path="/var/lib/kubelet/pods/45cecbc0-ddb6-4cc1-b2d8-892f598086a5/volumes" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.230436 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7511c3-f168-4aa4-ab7a-09e94e1ee900" path="/var/lib/kubelet/pods/8f7511c3-f168-4aa4-ab7a-09e94e1ee900/volumes" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.231379 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8c743f-e305-47fe-9858-0c0af2a86ea3" path="/var/lib/kubelet/pods/cd8c743f-e305-47fe-9858-0c0af2a86ea3/volumes" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.232619 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d532f249-806b-4c9d-936e-7504d83f11ae" path="/var/lib/kubelet/pods/d532f249-806b-4c9d-936e-7504d83f11ae/volumes" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.233358 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1ccc88e-b013-4c52-92b1-6e6462492c3c" path="/var/lib/kubelet/pods/e1ccc88e-b013-4c52-92b1-6e6462492c3c/volumes" Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.368051 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5jp2v"] Oct 02 18:25:29 crc kubenswrapper[4832]: W1002 18:25:29.372447 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d465b37_f6e4_48a2_bda2_efc7d3601131.slice/crio-0d333d1bad3e55acf2e8cfed0241d6da5cd7e4897a040b80b8c7e0488698ac85 WatchSource:0}: Error finding container 0d333d1bad3e55acf2e8cfed0241d6da5cd7e4897a040b80b8c7e0488698ac85: Status 404 returned error can't find the container with id 0d333d1bad3e55acf2e8cfed0241d6da5cd7e4897a040b80b8c7e0488698ac85 Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.420354 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p6rnh"] Oct 02 18:25:29 crc kubenswrapper[4832]: W1002 18:25:29.444713 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bbe1430_3664_4a03_97a8_5302998288ca.slice/crio-2afd6c5d1601f76d9042fae1b78f7d962c273503716f6b68258411625633fb8e WatchSource:0}: Error finding container 2afd6c5d1601f76d9042fae1b78f7d962c273503716f6b68258411625633fb8e: Status 404 returned error can't find the container with id 2afd6c5d1601f76d9042fae1b78f7d962c273503716f6b68258411625633fb8e Oct 02 18:25:29 crc kubenswrapper[4832]: I1002 18:25:29.450911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jp2v" event={"ID":"0d465b37-f6e4-48a2-bda2-efc7d3601131","Type":"ContainerStarted","Data":"0d333d1bad3e55acf2e8cfed0241d6da5cd7e4897a040b80b8c7e0488698ac85"} Oct 02 18:25:30 crc kubenswrapper[4832]: I1002 18:25:30.458615 4832 generic.go:334] "Generic (PLEG): container finished" podID="4bbe1430-3664-4a03-97a8-5302998288ca" containerID="8edfdcf1f92067f7ae76ed3732b21ac47aa5db47fcd180286148ef44c3875ada" exitCode=0 Oct 02 18:25:30 crc kubenswrapper[4832]: I1002 18:25:30.458695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rnh" 
event={"ID":"4bbe1430-3664-4a03-97a8-5302998288ca","Type":"ContainerDied","Data":"8edfdcf1f92067f7ae76ed3732b21ac47aa5db47fcd180286148ef44c3875ada"} Oct 02 18:25:30 crc kubenswrapper[4832]: I1002 18:25:30.459001 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rnh" event={"ID":"4bbe1430-3664-4a03-97a8-5302998288ca","Type":"ContainerStarted","Data":"2afd6c5d1601f76d9042fae1b78f7d962c273503716f6b68258411625633fb8e"} Oct 02 18:25:30 crc kubenswrapper[4832]: I1002 18:25:30.462102 4832 generic.go:334] "Generic (PLEG): container finished" podID="0d465b37-f6e4-48a2-bda2-efc7d3601131" containerID="aa925ab5b481a1a5ea1ceda1e413182ac796b2360f9fee8a607fddcdb16d79ad" exitCode=0 Oct 02 18:25:30 crc kubenswrapper[4832]: I1002 18:25:30.462138 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jp2v" event={"ID":"0d465b37-f6e4-48a2-bda2-efc7d3601131","Type":"ContainerDied","Data":"aa925ab5b481a1a5ea1ceda1e413182ac796b2360f9fee8a607fddcdb16d79ad"} Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.012277 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-whm2b"] Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.013418 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.021926 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.023874 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whm2b"] Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.034778 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n6q2\" (UniqueName: \"kubernetes.io/projected/0467cc9b-9752-4c8c-bde0-660d88dabfb9-kube-api-access-6n6q2\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.035020 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0467cc9b-9752-4c8c-bde0-660d88dabfb9-catalog-content\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.035184 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0467cc9b-9752-4c8c-bde0-660d88dabfb9-utilities\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.136912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0467cc9b-9752-4c8c-bde0-660d88dabfb9-utilities\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.137013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6n6q2\" (UniqueName: \"kubernetes.io/projected/0467cc9b-9752-4c8c-bde0-660d88dabfb9-kube-api-access-6n6q2\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.137063 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0467cc9b-9752-4c8c-bde0-660d88dabfb9-catalog-content\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.137589 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0467cc9b-9752-4c8c-bde0-660d88dabfb9-catalog-content\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.137867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0467cc9b-9752-4c8c-bde0-660d88dabfb9-utilities\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.160491 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n6q2\" (UniqueName: \"kubernetes.io/projected/0467cc9b-9752-4c8c-bde0-660d88dabfb9-kube-api-access-6n6q2\") pod \"certified-operators-whm2b\" (UID: \"0467cc9b-9752-4c8c-bde0-660d88dabfb9\") " pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.208728 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m75rn"] Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.209893 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.212657 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.220415 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m75rn"] Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.238104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-utilities\") pod \"community-operators-m75rn\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.238313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-catalog-content\") pod \"community-operators-m75rn\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.238451 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwzbl\" (UniqueName: \"kubernetes.io/projected/65d8bf7b-df8d-4dba-a578-101604e1b479-kube-api-access-qwzbl\") pod \"community-operators-m75rn\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.339255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-utilities\") pod \"community-operators-m75rn\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.339671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-catalog-content\") pod \"community-operators-m75rn\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.339708 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwzbl\" (UniqueName: \"kubernetes.io/projected/65d8bf7b-df8d-4dba-a578-101604e1b479-kube-api-access-qwzbl\") pod \"community-operators-m75rn\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.340163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-utilities\") pod \"community-operators-m75rn\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.340176 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-catalog-content\") pod \"community-operators-m75rn\" (UID: 
\"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.342079 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.368143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwzbl\" (UniqueName: \"kubernetes.io/projected/65d8bf7b-df8d-4dba-a578-101604e1b479-kube-api-access-qwzbl\") pod \"community-operators-m75rn\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.476257 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jp2v" event={"ID":"0d465b37-f6e4-48a2-bda2-efc7d3601131","Type":"ContainerStarted","Data":"44c1b7cff77dee5bcf0145b27d5c8be511744845d328f1346a910ed732a92816"} Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.541514 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-whm2b"] Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.541563 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:31 crc kubenswrapper[4832]: I1002 18:25:31.922615 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m75rn"] Oct 02 18:25:31 crc kubenswrapper[4832]: W1002 18:25:31.928702 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65d8bf7b_df8d_4dba_a578_101604e1b479.slice/crio-9ab07768e8b05c4c53cde7b2f4e37e18cefc18d45b4af6ad95ae9757080cd6b4 WatchSource:0}: Error finding container 9ab07768e8b05c4c53cde7b2f4e37e18cefc18d45b4af6ad95ae9757080cd6b4: Status 404 returned error can't find the container with id 9ab07768e8b05c4c53cde7b2f4e37e18cefc18d45b4af6ad95ae9757080cd6b4 Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.485455 4832 generic.go:334] "Generic (PLEG): container finished" podID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerID="47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106" exitCode=0 Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.485552 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m75rn" event={"ID":"65d8bf7b-df8d-4dba-a578-101604e1b479","Type":"ContainerDied","Data":"47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106"} Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.485591 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m75rn" event={"ID":"65d8bf7b-df8d-4dba-a578-101604e1b479","Type":"ContainerStarted","Data":"9ab07768e8b05c4c53cde7b2f4e37e18cefc18d45b4af6ad95ae9757080cd6b4"} Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.493538 4832 generic.go:334] "Generic (PLEG): container finished" podID="0d465b37-f6e4-48a2-bda2-efc7d3601131" containerID="44c1b7cff77dee5bcf0145b27d5c8be511744845d328f1346a910ed732a92816" exitCode=0 Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.493679 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jp2v" 
event={"ID":"0d465b37-f6e4-48a2-bda2-efc7d3601131","Type":"ContainerDied","Data":"44c1b7cff77dee5bcf0145b27d5c8be511744845d328f1346a910ed732a92816"} Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.499478 4832 generic.go:334] "Generic (PLEG): container finished" podID="4bbe1430-3664-4a03-97a8-5302998288ca" containerID="28008db36f59ee14e30bfc2d416f2023edba59d6e846b0ab80f31d3157a25f35" exitCode=0 Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.499616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rnh" event={"ID":"4bbe1430-3664-4a03-97a8-5302998288ca","Type":"ContainerDied","Data":"28008db36f59ee14e30bfc2d416f2023edba59d6e846b0ab80f31d3157a25f35"} Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.503452 4832 generic.go:334] "Generic (PLEG): container finished" podID="0467cc9b-9752-4c8c-bde0-660d88dabfb9" containerID="4300fd70d72d2d154718925eaa977a04bb8a49c8f54cc9313dcb7a0b3c2d702a" exitCode=0 Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.503500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whm2b" event={"ID":"0467cc9b-9752-4c8c-bde0-660d88dabfb9","Type":"ContainerDied","Data":"4300fd70d72d2d154718925eaa977a04bb8a49c8f54cc9313dcb7a0b3c2d702a"} Oct 02 18:25:32 crc kubenswrapper[4832]: I1002 18:25:32.503540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whm2b" event={"ID":"0467cc9b-9752-4c8c-bde0-660d88dabfb9","Type":"ContainerStarted","Data":"2f44e0995b674c1bd80877bad432e0dffd0c14c0f2b43f9793350d86eeb210c6"} Oct 02 18:25:33 crc kubenswrapper[4832]: I1002 18:25:33.514770 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p6rnh" event={"ID":"4bbe1430-3664-4a03-97a8-5302998288ca","Type":"ContainerStarted","Data":"eb28fb13efa601f1965a54cd86ec2f84ac06e58c7c33c93b8fdb34522d4b38f8"} Oct 02 18:25:33 crc kubenswrapper[4832]: I1002 18:25:33.523593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5jp2v" event={"ID":"0d465b37-f6e4-48a2-bda2-efc7d3601131","Type":"ContainerStarted","Data":"5b4ab8763735c05a17c7ff3d2792e10d0899008b2985fc577a5c168b3845fdc2"} Oct 02 18:25:33 crc kubenswrapper[4832]: I1002 18:25:33.530744 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whm2b" event={"ID":"0467cc9b-9752-4c8c-bde0-660d88dabfb9","Type":"ContainerStarted","Data":"c8386e2a076b6756cd253bc8fc07d34778b853041a5d7f7875c84fdf15e3204c"} Oct 02 18:25:33 crc kubenswrapper[4832]: I1002 18:25:33.543623 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p6rnh" podStartSLOduration=3.031119847 podStartE2EDuration="5.543605097s" podCreationTimestamp="2025-10-02 18:25:28 +0000 UTC" firstStartedPulling="2025-10-02 18:25:30.462485225 +0000 UTC m=+287.431928097" lastFinishedPulling="2025-10-02 18:25:32.974970475 +0000 UTC m=+289.944413347" observedRunningTime="2025-10-02 18:25:33.540564123 +0000 UTC m=+290.510007005" watchObservedRunningTime="2025-10-02 18:25:33.543605097 +0000 UTC m=+290.513047969" Oct 02 18:25:33 crc kubenswrapper[4832]: I1002 18:25:33.586875 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5jp2v" podStartSLOduration=3.012091852 podStartE2EDuration="5.5868559s" podCreationTimestamp="2025-10-02 18:25:28 +0000 UTC" 
firstStartedPulling="2025-10-02 18:25:30.464625131 +0000 UTC m=+287.434068003" lastFinishedPulling="2025-10-02 18:25:33.039389179 +0000 UTC m=+290.008832051" observedRunningTime="2025-10-02 18:25:33.586110027 +0000 UTC m=+290.555552919" watchObservedRunningTime="2025-10-02 18:25:33.5868559 +0000 UTC m=+290.556298762" Oct 02 18:25:34 crc kubenswrapper[4832]: I1002 18:25:34.537668 4832 generic.go:334] "Generic (PLEG): container finished" podID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerID="9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0" exitCode=0 Oct 02 18:25:34 crc kubenswrapper[4832]: I1002 18:25:34.537790 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m75rn" event={"ID":"65d8bf7b-df8d-4dba-a578-101604e1b479","Type":"ContainerDied","Data":"9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0"} Oct 02 18:25:34 crc kubenswrapper[4832]: I1002 18:25:34.542029 4832 generic.go:334] "Generic (PLEG): container finished" podID="0467cc9b-9752-4c8c-bde0-660d88dabfb9" containerID="c8386e2a076b6756cd253bc8fc07d34778b853041a5d7f7875c84fdf15e3204c" exitCode=0 Oct 02 18:25:34 crc kubenswrapper[4832]: I1002 18:25:34.542448 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whm2b" event={"ID":"0467cc9b-9752-4c8c-bde0-660d88dabfb9","Type":"ContainerDied","Data":"c8386e2a076b6756cd253bc8fc07d34778b853041a5d7f7875c84fdf15e3204c"} Oct 02 18:25:35 crc kubenswrapper[4832]: I1002 18:25:35.550068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-whm2b" event={"ID":"0467cc9b-9752-4c8c-bde0-660d88dabfb9","Type":"ContainerStarted","Data":"a57fafb73d264bcb2e77d9eef1c60cfd9cadcfb5aecf26fdde804d9eee35e832"} Oct 02 18:25:35 crc kubenswrapper[4832]: I1002 18:25:35.568955 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-whm2b" podStartSLOduration=2.884220018 podStartE2EDuration="5.568935001s" podCreationTimestamp="2025-10-02 18:25:30 +0000 UTC" firstStartedPulling="2025-10-02 18:25:32.508791374 +0000 UTC m=+289.478234296" lastFinishedPulling="2025-10-02 18:25:35.193506407 +0000 UTC m=+292.162949279" observedRunningTime="2025-10-02 18:25:35.567723713 +0000 UTC m=+292.537166605" watchObservedRunningTime="2025-10-02 18:25:35.568935001 +0000 UTC m=+292.538377873" Oct 02 18:25:36 crc kubenswrapper[4832]: I1002 18:25:36.566383 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m75rn" event={"ID":"65d8bf7b-df8d-4dba-a578-101604e1b479","Type":"ContainerStarted","Data":"2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259"} Oct 02 18:25:36 crc kubenswrapper[4832]: I1002 18:25:36.585020 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m75rn" podStartSLOduration=2.653810752 podStartE2EDuration="5.585000039s" podCreationTimestamp="2025-10-02 18:25:31 +0000 UTC" firstStartedPulling="2025-10-02 18:25:32.487924592 +0000 UTC m=+289.457367494" lastFinishedPulling="2025-10-02 18:25:35.419113909 +0000 UTC m=+292.388556781" observedRunningTime="2025-10-02 18:25:36.582968406 +0000 UTC m=+293.552411278" watchObservedRunningTime="2025-10-02 18:25:36.585000039 +0000 UTC m=+293.554442911" Oct 02 18:25:38 crc kubenswrapper[4832]: I1002 18:25:38.919123 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-5jp2v" Oct 02 18:25:38 crc kubenswrapper[4832]: I1002 18:25:38.919516 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5jp2v" Oct 02 18:25:38 crc kubenswrapper[4832]: I1002 18:25:38.974914 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5jp2v" Oct 02 18:25:39 crc kubenswrapper[4832]: I1002 18:25:39.160503 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:39 crc kubenswrapper[4832]: I1002 18:25:39.160556 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:39 crc kubenswrapper[4832]: I1002 18:25:39.209826 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:39 crc kubenswrapper[4832]: I1002 18:25:39.620778 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5jp2v" Oct 02 18:25:39 crc kubenswrapper[4832]: I1002 18:25:39.628085 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p6rnh" Oct 02 18:25:41 crc kubenswrapper[4832]: I1002 18:25:41.342370 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:41 crc kubenswrapper[4832]: I1002 18:25:41.342437 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:41 crc kubenswrapper[4832]: I1002 18:25:41.394396 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:41 crc kubenswrapper[4832]: I1002 18:25:41.542691 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:41 crc kubenswrapper[4832]: I1002 18:25:41.542824 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:41 crc kubenswrapper[4832]: I1002 18:25:41.607151 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:41 crc kubenswrapper[4832]: I1002 18:25:41.649709 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-whm2b" Oct 02 18:25:41 crc kubenswrapper[4832]: I1002 18:25:41.673345 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m75rn" Oct 02 18:25:56 crc kubenswrapper[4832]: I1002 18:25:56.987430 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z"] Oct 02 18:25:56 crc kubenswrapper[4832]: I1002 18:25:56.992025 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z"] Oct 02 18:25:56 crc kubenswrapper[4832]: I1002 18:25:56.992125 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:56 crc kubenswrapper[4832]: I1002 18:25:56.995673 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Oct 02 18:25:56 crc kubenswrapper[4832]: I1002 18:25:56.995977 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Oct 02 18:25:56 crc kubenswrapper[4832]: I1002 18:25:56.996444 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Oct 02 18:25:56 crc kubenswrapper[4832]: I1002 18:25:56.996463 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Oct 02 18:25:56 crc kubenswrapper[4832]: I1002 18:25:56.997550 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.123625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vstpk\" (UniqueName: \"kubernetes.io/projected/36003767-4cec-424e-adb2-9ba9338d5788-kube-api-access-vstpk\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.123782 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/36003767-4cec-424e-adb2-9ba9338d5788-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.123906 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/36003767-4cec-424e-adb2-9ba9338d5788-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.224403 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/36003767-4cec-424e-adb2-9ba9338d5788-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.225692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vstpk\" (UniqueName: \"kubernetes.io/projected/36003767-4cec-424e-adb2-9ba9338d5788-kube-api-access-vstpk\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.225757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: 
\"kubernetes.io/configmap/36003767-4cec-424e-adb2-9ba9338d5788-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.227718 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/36003767-4cec-424e-adb2-9ba9338d5788-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.236557 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/36003767-4cec-424e-adb2-9ba9338d5788-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.256778 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vstpk\" (UniqueName: \"kubernetes.io/projected/36003767-4cec-424e-adb2-9ba9338d5788-kube-api-access-vstpk\") pod \"cluster-monitoring-operator-6d5b84845-tth7z\" (UID: \"36003767-4cec-424e-adb2-9ba9338d5788\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.318140 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.595287 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z"] Oct 02 18:25:57 crc kubenswrapper[4832]: W1002 18:25:57.606470 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36003767_4cec_424e_adb2_9ba9338d5788.slice/crio-e7dceda2514742389086346a0447071d450df5a4653e8ccac1f786cdac7c8220 WatchSource:0}: Error finding container e7dceda2514742389086346a0447071d450df5a4653e8ccac1f786cdac7c8220: Status 404 returned error can't find the container with id e7dceda2514742389086346a0447071d450df5a4653e8ccac1f786cdac7c8220 Oct 02 18:25:57 crc kubenswrapper[4832]: I1002 18:25:57.697754 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" event={"ID":"36003767-4cec-424e-adb2-9ba9338d5788","Type":"ContainerStarted","Data":"e7dceda2514742389086346a0447071d450df5a4653e8ccac1f786cdac7c8220"} Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.321476 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-926hz"] Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.323374 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.336552 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-926hz"]
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.434540 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"]
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.435382 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.437816 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-8s7hl"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.438036 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.450725 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"]
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.495670 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e83d3f9e-04fe-4199-9c57-6383dd96c27e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.495721 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-bound-sa-token\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.495903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e83d3f9e-04fe-4199-9c57-6383dd96c27e-trusted-ca\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.495960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e83d3f9e-04fe-4199-9c57-6383dd96c27e-registry-certificates\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.496176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e83d3f9e-04fe-4199-9c57-6383dd96c27e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.496319 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-registry-tls\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.496355 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcnf8\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-kube-api-access-gcnf8\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.496418 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.522759 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.597850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-bound-sa-token\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.597924 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e83d3f9e-04fe-4199-9c57-6383dd96c27e-trusted-ca\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.597949 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e83d3f9e-04fe-4199-9c57-6383dd96c27e-registry-certificates\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.597985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e83d3f9e-04fe-4199-9c57-6383dd96c27e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.598016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-registry-tls\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.598036 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/79f7276a-0716-48f0-bd4a-a1b82663939c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-6vcpz\" (UID: \"79f7276a-0716-48f0-bd4a-a1b82663939c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.598057 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcnf8\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-kube-api-access-gcnf8\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.598089 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e83d3f9e-04fe-4199-9c57-6383dd96c27e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.598850 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e83d3f9e-04fe-4199-9c57-6383dd96c27e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.599300 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e83d3f9e-04fe-4199-9c57-6383dd96c27e-trusted-ca\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.599962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e83d3f9e-04fe-4199-9c57-6383dd96c27e-registry-certificates\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.604311 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e83d3f9e-04fe-4199-9c57-6383dd96c27e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.614752 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-registry-tls\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.615211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcnf8\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-kube-api-access-gcnf8\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.615996 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e83d3f9e-04fe-4199-9c57-6383dd96c27e-bound-sa-token\") pod \"image-registry-66df7c8f76-926hz\" (UID: \"e83d3f9e-04fe-4199-9c57-6383dd96c27e\") " pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.638099 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.698937 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/79f7276a-0716-48f0-bd4a-a1b82663939c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-6vcpz\" (UID: \"79f7276a-0716-48f0-bd4a-a1b82663939c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"
Oct 02 18:26:01 crc kubenswrapper[4832]: E1002 18:26:01.699175 4832 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Oct 02 18:26:01 crc kubenswrapper[4832]: E1002 18:26:01.699579 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79f7276a-0716-48f0-bd4a-a1b82663939c-tls-certificates podName:79f7276a-0716-48f0-bd4a-a1b82663939c nodeName:}" failed. No retries permitted until 2025-10-02 18:26:02.199552461 +0000 UTC m=+319.168995333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/79f7276a-0716-48f0-bd4a-a1b82663939c-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-6vcpz" (UID: "79f7276a-0716-48f0-bd4a-a1b82663939c") : secret "prometheus-operator-admission-webhook-tls" not found
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.738014 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" event={"ID":"36003767-4cec-424e-adb2-9ba9338d5788","Type":"ContainerStarted","Data":"a1feec7a7aa8d8b697cd3e6028fdf732bafc78b27ce01725447c3d56b5d6d8de"}
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.758587 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-tth7z" podStartSLOduration=2.647290322 podStartE2EDuration="5.758562826s" podCreationTimestamp="2025-10-02 18:25:56 +0000 UTC" firstStartedPulling="2025-10-02 18:25:57.609431803 +0000 UTC m=+314.578874715" lastFinishedPulling="2025-10-02 18:26:00.720704307 +0000 UTC m=+317.690147219" observedRunningTime="2025-10-02 18:26:01.757646768 +0000 UTC m=+318.727089640" watchObservedRunningTime="2025-10-02 18:26:01.758562826 +0000 UTC m=+318.728005708"
Oct 02 18:26:01 crc kubenswrapper[4832]: I1002 18:26:01.862735 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-926hz"]
Oct 02 18:26:01 crc kubenswrapper[4832]: W1002 18:26:01.873653 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode83d3f9e_04fe_4199_9c57_6383dd96c27e.slice/crio-e4696062cc3e832e1f97c60cfd20ce7b7e95162781ed739f9cdbf215f0d6370b WatchSource:0}: Error finding container e4696062cc3e832e1f97c60cfd20ce7b7e95162781ed739f9cdbf215f0d6370b: Status 404 returned error can't find the container with id e4696062cc3e832e1f97c60cfd20ce7b7e95162781ed739f9cdbf215f0d6370b
Oct 02 18:26:02 crc kubenswrapper[4832]: I1002 18:26:02.206970 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/79f7276a-0716-48f0-bd4a-a1b82663939c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-6vcpz\" (UID: \"79f7276a-0716-48f0-bd4a-a1b82663939c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"
Oct 02 18:26:02 crc kubenswrapper[4832]: I1002 18:26:02.215137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/79f7276a-0716-48f0-bd4a-a1b82663939c-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-6vcpz\" (UID: \"79f7276a-0716-48f0-bd4a-a1b82663939c\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"
Oct 02 18:26:02 crc kubenswrapper[4832]: I1002 18:26:02.350251 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"
Oct 02 18:26:02 crc kubenswrapper[4832]: I1002 18:26:02.573435 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"]
Oct 02 18:26:02 crc kubenswrapper[4832]: W1002 18:26:02.580006 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f7276a_0716_48f0_bd4a_a1b82663939c.slice/crio-3097b21e0efa439de3a075e785bcfd630cc3ee5f557a3c4dc160682678ebe47e WatchSource:0}: Error finding container 3097b21e0efa439de3a075e785bcfd630cc3ee5f557a3c4dc160682678ebe47e: Status 404 returned error can't find the container with id 3097b21e0efa439de3a075e785bcfd630cc3ee5f557a3c4dc160682678ebe47e
Oct 02 18:26:02 crc kubenswrapper[4832]: I1002 18:26:02.745491 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz" event={"ID":"79f7276a-0716-48f0-bd4a-a1b82663939c","Type":"ContainerStarted","Data":"3097b21e0efa439de3a075e785bcfd630cc3ee5f557a3c4dc160682678ebe47e"}
Oct 02 18:26:02 crc kubenswrapper[4832]: I1002 18:26:02.747529 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-926hz" event={"ID":"e83d3f9e-04fe-4199-9c57-6383dd96c27e","Type":"ContainerStarted","Data":"7b4b30f30d321f25ed19e30dc05ac05442ce1ec741f906fbe4be4e5657b66718"}
Oct 02 18:26:02 crc kubenswrapper[4832]: I1002 18:26:02.747594 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-926hz" event={"ID":"e83d3f9e-04fe-4199-9c57-6383dd96c27e","Type":"ContainerStarted","Data":"e4696062cc3e832e1f97c60cfd20ce7b7e95162781ed739f9cdbf215f0d6370b"}
Oct 02 18:26:02 crc kubenswrapper[4832]: I1002 18:26:02.776875 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-926hz" podStartSLOduration=1.776855203 podStartE2EDuration="1.776855203s" podCreationTimestamp="2025-10-02 18:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:26:02.77675357 +0000 UTC m=+319.746196482" watchObservedRunningTime="2025-10-02 18:26:02.776855203 +0000 UTC m=+319.746298075"
Oct 02 18:26:03 crc kubenswrapper[4832]: I1002 18:26:03.764784 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-926hz"
Oct 02 18:26:04 crc kubenswrapper[4832]: I1002 18:26:04.773161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz" event={"ID":"79f7276a-0716-48f0-bd4a-a1b82663939c","Type":"ContainerStarted","Data":"d7048312dded15c4eb814508be81294361fa2d78e0ef1ae3b583c5cba3fda4b9"}
Oct 02 18:26:04 crc kubenswrapper[4832]: I1002 18:26:04.773692 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"
Oct 02 18:26:04 crc kubenswrapper[4832]: I1002 18:26:04.781357 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz"
Oct 02 18:26:04 crc kubenswrapper[4832]: I1002 18:26:04.800256 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-6vcpz" podStartSLOduration=2.230189044 podStartE2EDuration="3.800226286s" podCreationTimestamp="2025-10-02 18:26:01 +0000 UTC" firstStartedPulling="2025-10-02 18:26:02.582275502 +0000 UTC m=+319.551718384" lastFinishedPulling="2025-10-02 18:26:04.152312754 +0000 UTC m=+321.121755626" observedRunningTime="2025-10-02 18:26:04.794256359 +0000 UTC m=+321.763699241" watchObservedRunningTime="2025-10-02 18:26:04.800226286 +0000 UTC m=+321.769669208"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.492706 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zkj65"]
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.493778 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.496870 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-nqx6b"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.497408 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.497760 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.508368 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.512909 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zkj65"]
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.564982 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.565071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.565296 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.565463 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cscq9\" (UniqueName: \"kubernetes.io/projected/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-kube-api-access-cscq9\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.666429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.666499 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.666589 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.666637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cscq9\" (UniqueName: \"kubernetes.io/projected/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-kube-api-access-cscq9\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.667759 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-metrics-client-ca\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.680173 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.680211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.688126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscq9\" (UniqueName: \"kubernetes.io/projected/dc7d2286-49f9-4027-af68-22f0fa8f0c2b-kube-api-access-cscq9\") pod \"prometheus-operator-db54df47d-zkj65\" (UID: \"dc7d2286-49f9-4027-af68-22f0fa8f0c2b\") " pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:05 crc kubenswrapper[4832]: I1002 18:26:05.812061 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65"
Oct 02 18:26:06 crc kubenswrapper[4832]: I1002 18:26:06.090146 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-zkj65"]
Oct 02 18:26:06 crc kubenswrapper[4832]: W1002 18:26:06.098901 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc7d2286_49f9_4027_af68_22f0fa8f0c2b.slice/crio-1816356d80c447143552c9dab2474bedced1b71af8d544d300b7c2ff6e6aa164 WatchSource:0}: Error finding container 1816356d80c447143552c9dab2474bedced1b71af8d544d300b7c2ff6e6aa164: Status 404 returned error can't find the container with id 1816356d80c447143552c9dab2474bedced1b71af8d544d300b7c2ff6e6aa164
Oct 02 18:26:06 crc kubenswrapper[4832]: I1002 18:26:06.789883 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65" event={"ID":"dc7d2286-49f9-4027-af68-22f0fa8f0c2b","Type":"ContainerStarted","Data":"1816356d80c447143552c9dab2474bedced1b71af8d544d300b7c2ff6e6aa164"}
Oct 02 18:26:08 crc kubenswrapper[4832]: I1002 18:26:08.809682 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65" event={"ID":"dc7d2286-49f9-4027-af68-22f0fa8f0c2b","Type":"ContainerStarted","Data":"2e9fcdc68462af39bfd87e3a626433c39d738032dc02d61fda6484404d677a5a"}
Oct 02 18:26:08 crc kubenswrapper[4832]: I1002 18:26:08.810225 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65" event={"ID":"dc7d2286-49f9-4027-af68-22f0fa8f0c2b","Type":"ContainerStarted","Data":"156b0e5b997666a3dc3e5091b3a415beb8c7cf36a6dbfd110d5c65b30dc18529"}
Oct 02 18:26:08 crc kubenswrapper[4832]: I1002 18:26:08.837040 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-zkj65" podStartSLOduration=2.248149777 podStartE2EDuration="3.83701227s" podCreationTimestamp="2025-10-02 18:26:05 +0000 UTC" firstStartedPulling="2025-10-02 18:26:06.10190366 +0000 UTC m=+323.071346552" lastFinishedPulling="2025-10-02 18:26:07.690766133 +0000 UTC m=+324.660209045" observedRunningTime="2025-10-02 18:26:08.83198107 +0000 UTC m=+325.801423932" watchObservedRunningTime="2025-10-02 18:26:08.83701227 +0000 UTC m=+325.806455172"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.896630 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"]
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.900960 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.912172 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.912332 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.912581 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-q8w6h"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.922074 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"]
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.926329 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"]
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.933754 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.967789 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.967863 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pb5s\" (UniqueName: \"kubernetes.io/projected/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-kube-api-access-7pb5s\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.967890 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.967936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.976301 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.982510 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.982608 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.982672 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-n44q7"
Oct 02 18:26:10 crc kubenswrapper[4832]: I1002 18:26:10.988459 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"]
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.068896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.068953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pb5s\" (UniqueName: \"kubernetes.io/projected/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-kube-api-access-7pb5s\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.068979 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.069010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.069031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.069048 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68tn\" (UniqueName: \"kubernetes.io/projected/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-api-access-f68tn\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.069073 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c0a9d86f-2f91-4412-b05f-bf6a31367dac-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.069097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.069336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0a9d86f-2f91-4412-b05f-bf6a31367dac-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.069432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.069825 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.074501 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.083216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.086897 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-mnqzs"]
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.087927 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.094908 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.095289 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.095526 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-7tqfp"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.113191 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pb5s\" (UniqueName: \"kubernetes.io/projected/035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32-kube-api-access-7pb5s\") pod \"openshift-state-metrics-566fddb674-2vvch\" (UID: \"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.170926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.170981 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171006 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f68tn\" (UniqueName: \"kubernetes.io/projected/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-api-access-f68tn\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c0a9d86f-2f91-4412-b05f-bf6a31367dac-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-root\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171102 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171127 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-wtmp\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171158 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-sys\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2d2c\" (UniqueName: \"kubernetes.io/projected/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-kube-api-access-z2d2c\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0a9d86f-2f91-4412-b05f-bf6a31367dac-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-textfile\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171327 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-metrics-client-ca\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.171362 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-tls\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.172323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/c0a9d86f-2f91-4412-b05f-bf6a31367dac-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.172879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c0a9d86f-2f91-4412-b05f-bf6a31367dac-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.172905 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.176017 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.193075 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f68tn\" (UniqueName: \"kubernetes.io/projected/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-api-access-f68tn\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.195289 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c0a9d86f-2f91-4412-b05f-bf6a31367dac-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-98shq\" (UID: \"c0a9d86f-2f91-4412-b05f-bf6a31367dac\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272319 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-root\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272398 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-wtmp\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272423 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-sys\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272427 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-root\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272437 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2d2c\" (UniqueName: \"kubernetes.io/projected/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-kube-api-access-z2d2c\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272622 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-sys\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-textfile\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272773 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-metrics-client-ca\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-tls\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: E1002 18:26:11.273169 4832 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.273228 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-textfile\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: E1002 18:26:11.273253 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-tls podName:144c7fb3-5112-46d8-acfd-ec9afe4ebf07 nodeName:}" failed. No retries permitted until 2025-10-02 18:26:11.77322527 +0000 UTC m=+328.742668142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-tls") pod "node-exporter-mnqzs" (UID: "144c7fb3-5112-46d8-acfd-ec9afe4ebf07") : secret "node-exporter-tls" not found
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.273561 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-metrics-client-ca\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.272802 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-wtmp\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.277706 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.278378 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.289426 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.293705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2d2c\" (UniqueName: \"kubernetes.io/projected/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-kube-api-access-z2d2c\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.526437 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq"]
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.573090 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-2vvch"]
Oct 02 18:26:11 crc kubenswrapper[4832]: W1002 18:26:11.588205 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod035cc1a8_2ce7_4cd8_adaf_53a5ab54ee32.slice/crio-68803b0752883fe6890f787011f1770112423e3dbc42e605de9f54715c260971 WatchSource:0}: Error finding container 68803b0752883fe6890f787011f1770112423e3dbc42e605de9f54715c260971: Status 404 returned error can't find the container with id 68803b0752883fe6890f787011f1770112423e3dbc42e605de9f54715c260971
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.780502 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-tls\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.787556 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/144c7fb3-5112-46d8-acfd-ec9afe4ebf07-node-exporter-tls\") pod \"node-exporter-mnqzs\" (UID: \"144c7fb3-5112-46d8-acfd-ec9afe4ebf07\") " pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.834013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq" event={"ID":"c0a9d86f-2f91-4412-b05f-bf6a31367dac","Type":"ContainerStarted","Data":"6fbb89a544cb42128c35befaae1b01ce13e35ffecddc85c88da550468c7f86ad"}
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.835663 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch" event={"ID":"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32","Type":"ContainerStarted","Data":"5dece6b3004b6ad595f4b53cd44ee8c50da87fe0edd7d65bcdace094a1b595a4"}
Oct 02 18:26:11 crc kubenswrapper[4832]: I1002 18:26:11.835693 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch" event={"ID":"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32","Type":"ContainerStarted","Data":"68803b0752883fe6890f787011f1770112423e3dbc42e605de9f54715c260971"}
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.055333 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-mnqzs"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.093315 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.095521 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.097968 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.098234 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.098563 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.098751 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.099003 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-hjpqb"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.099189 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.099213 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.107169 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.110479 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.117531 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.185560 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-web-config\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.185606 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.185642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-config-volume\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.185667 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.185879 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.185970 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwkhq\" (UniqueName: \"kubernetes.io/projected/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-kube-api-access-xwkhq\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.186014 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.186040 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.186077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.186098 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.186241 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.186406 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-config-out\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287201 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-config-out\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287364 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-web-config\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287385 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-config-volume\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287429 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287492 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwkhq\" (UniqueName: \"kubernetes.io/projected/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-kube-api-access-xwkhq\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287531 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.287551 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.289471 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.289860 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.290230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.293397 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-config-volume\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.293537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.293652 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\"
(UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-web-config\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0" Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.294047 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0" Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.296582 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0" Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.298026 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0" Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.298595 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-config-out\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0" Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.298780 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0" Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.312226 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwkhq\" (UniqueName: \"kubernetes.io/projected/a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d-kube-api-access-xwkhq\") pod \"alertmanager-main-0\" (UID: \"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d\") " pod="openshift-monitoring/alertmanager-main-0" Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.476169 4832 util.go:30] "No sandbox for pod can be found. 
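The run of entries above is one complete volume-mount fan-out for alertmanager-main-0: reconciler_common.go first records VerifyControllerAttachedVolume for each of the twelve volumes, then MountVolume started, and operation_generator.go confirms MountVolume.SetUp succeeded, all within roughly 130 ms (18:26:12.185 to 18:26:12.312). A minimal sketch of how per-volume mount latency could be pulled out of a text dump of this journal follows; the kubelet.log file name is hypothetical, the regexes are modeled on the klog message format visible above, and a real tool would key on pod UID as well as volume name.

```python
# Sketch: pair "MountVolume started" with "MountVolume.SetUp succeeded"
# entries from a kubelet journal dump and report per-volume mount latency.
import re
from datetime import datetime

# klog header: I1002 18:26:12.287201 4832 reconciler_common.go:218] "..."
START = re.compile(
    r'I(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) \d+ \S+\] '
    r'"operationExecutor\.MountVolume started for volume \\"(?P<vol>[^\\"]+)\\"')
DONE = re.compile(
    r'I(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+) \d+ \S+\] '
    r'"MountVolume\.SetUp succeeded for volume \\"(?P<vol>[^\\"]+)\\"')

def ts(mmdd: str, hms: str) -> datetime:
    # klog omits the year; 2025 is taken from the surrounding journal.
    return datetime.strptime(f"2025{mmdd} {hms}", "%Y%m%d %H:%M:%S.%f")

started, latency = {}, {}
with open("kubelet.log") as fh:          # hypothetical dump of this journal
    for line in fh:
        if m := START.search(line):
            started[m["vol"]] = ts(m[1], m[2])
        elif (m := DONE.search(line)) and m["vol"] in started:
            latency[m["vol"]] = (ts(m[1], m[2]) - started.pop(m["vol"])).total_seconds()

# Slowest mounts first; keying only by volume name is fine within one pod's
# window but would collide across pods (e.g. "metrics-client-ca" recurs).
for vol, dt in sorted(latency.items(), key=lambda kv: -kv[1]):
    print(f"{dt * 1000:7.1f} ms  {vol}")
```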
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.476169 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.840733 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mnqzs" event={"ID":"144c7fb3-5112-46d8-acfd-ec9afe4ebf07","Type":"ContainerStarted","Data":"4c90078efe8a83388cd7b8ee6bb8a06ad31c741a112b917e8d9baddb2187ee8f"}
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.842771 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch" event={"ID":"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32","Type":"ContainerStarted","Data":"cac27a9a72d5b9d4e0373fd85c06865136fdf55c0e14e9df78f6ae2bd43d6443"}
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.897807 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.960557 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"]
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.962862 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.964896 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.965197 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-3kibnuk7lh98s"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.965696 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.966575 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.966617 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.967490 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-wc97k"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.967572 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Oct 02 18:26:12 crc kubenswrapper[4832]: I1002 18:26:12.974763 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"]
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.101920 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.101986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.102028 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-grpc-tls\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.102048 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9hg7\" (UniqueName: \"kubernetes.io/projected/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-kube-api-access-t9hg7\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.102092 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.102112 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.102127 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-metrics-client-ca\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.102160 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-tls\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.203852 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.204040 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.204100 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9hg7\" (UniqueName: \"kubernetes.io/projected/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-kube-api-access-t9hg7\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.204128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-grpc-tls\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.204167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.204198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.204222 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-metrics-client-ca\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.204249 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-tls\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.208367 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-metrics-client-ca\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.210039 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.210815 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.211820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-grpc-tls\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.212768 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.215390 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-tls\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.216581 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.227607 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9hg7\" (UniqueName: \"kubernetes.io/projected/583fa0c5-0450-444a-bc3c-2fcf7ec6838b-kube-api-access-t9hg7\") pod \"thanos-querier-6b85fb77d7-5bjn2\" (UID: \"583fa0c5-0450-444a-bc3c-2fcf7ec6838b\") " pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:13 crc kubenswrapper[4832]: I1002 18:26:13.289876 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"
Oct 02 18:26:14 crc kubenswrapper[4832]: I1002 18:26:14.860295 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d","Type":"ContainerStarted","Data":"ec1e4718fb5f2c4deebef33db8cff06243ea106f457837a57856ae6a01fc5d53"}
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.752747 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-595bf5d57c-vnfx5"]
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.754412 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.764255 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595bf5d57c-vnfx5"]
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.861689 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm4w7\" (UniqueName: \"kubernetes.io/projected/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-kube-api-access-mm4w7\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.862155 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-service-ca\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.862325 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-config\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.862380 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-oauth-config\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.862429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-trusted-ca-bundle\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.862469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-oauth-serving-cert\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.862496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-serving-cert\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.963972 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm4w7\" (UniqueName: \"kubernetes.io/projected/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-kube-api-access-mm4w7\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.964016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-service-ca\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.964049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-config\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.964083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-oauth-config\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.964116 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-trusted-ca-bundle\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.964135 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-oauth-serving-cert\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.964149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-serving-cert\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.965875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-service-ca\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.965924 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-config\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.966496 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-oauth-serving-cert\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.966669 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-trusted-ca-bundle\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.972714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-serving-cert\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.976899 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-oauth-config\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.980614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm4w7\" (UniqueName: \"kubernetes.io/projected/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-kube-api-access-mm4w7\") pod \"console-595bf5d57c-vnfx5\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:15 crc kubenswrapper[4832]: I1002 18:26:15.993153 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2"]
Oct 02 18:26:16 crc kubenswrapper[4832]: W1002 18:26:16.003832 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod583fa0c5_0450_444a_bc3c_2fcf7ec6838b.slice/crio-271d81b8e69b9ac1d917c407fb48d7c515472a8fca84d490aa463d0f925f7bb5 WatchSource:0}: Error finding container 271d81b8e69b9ac1d917c407fb48d7c515472a8fca84d490aa463d0f925f7bb5: Status 404 returned error can't find the container with id 271d81b8e69b9ac1d917c407fb48d7c515472a8fca84d490aa463d0f925f7bb5
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.149567 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595bf5d57c-vnfx5"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.343245 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-99c5fbc7f-td55z"]
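The W1002 manager.go:1169 warning above is a benign race at container creation: cadvisor sees the new crio-271d81b8... cgroup appear before CRI-O can answer a lookup for that ID, so the watch handler gets a 404; the same ID turns up shortly afterwards in a PLEG ContainerStarted event. A sketch that cross-checks every such warning against later PLEG events, over the same hypothetical kubelet.log (container IDs are the 64-hex strings seen here):

```python
# Sketch: confirm each cadvisor "can't find the container" warning is the
# usual startup race by checking the same ID later appears in a PLEG
# ContainerStarted event. Regexes follow the formats seen in this log.
import re

WATCH_404 = re.compile(r"can't find the container with id (?P<id>[0-9a-f]{64})")
STARTED = re.compile(r'"Type":"ContainerStarted","Data":"(?P<id>[0-9a-f]{64})"')

warned, started = set(), set()
with open("kubelet.log") as fh:          # hypothetical dump of this journal
    for line in fh:
        warned |= {m["id"] for m in WATCH_404.finditer(line)}
        started |= {m["id"] for m in STARTED.finditer(line)}

for cid in sorted(warned):
    status = "resolved" if cid in started else "NEVER STARTED"
    print(f"{cid[:12]}  {status}")
```

An ID that never resolves would point at a genuine CRI failure rather than this transient ordering.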
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.344669 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.347151 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.348479 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.348609 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-9bf16evhtu3b6"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.348785 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mwtg2"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.348907 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.349030 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.354782 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-99c5fbc7f-td55z"]
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.380655 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595bf5d57c-vnfx5"]
Oct 02 18:26:16 crc kubenswrapper[4832]: W1002 18:26:16.384791 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb684e9c6_f9cf_4b5c_914a_3cb52a34ac9f.slice/crio-275c40756a609cc95a2ca87328e7ab5d0ef8527f87c48b90d0079bce0be667a6 WatchSource:0}: Error finding container 275c40756a609cc95a2ca87328e7ab5d0ef8527f87c48b90d0079bce0be667a6: Status 404 returned error can't find the container with id 275c40756a609cc95a2ca87328e7ab5d0ef8527f87c48b90d0079bce0be667a6
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.470518 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/0be7aba4-6ccc-4274-9341-ee62e52d8f81-metrics-server-audit-profiles\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.470566 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-secret-metrics-client-certs\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.470607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/0be7aba4-6ccc-4274-9341-ee62e52d8f81-audit-log\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.470630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be7aba4-6ccc-4274-9341-ee62e52d8f81-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.470654 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tkrc\" (UniqueName: \"kubernetes.io/projected/0be7aba4-6ccc-4274-9341-ee62e52d8f81-kube-api-access-4tkrc\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.470678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-secret-metrics-server-tls\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.470721 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-client-ca-bundle\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.572763 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-secret-metrics-server-tls\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.572900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-client-ca-bundle\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.572973 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/0be7aba4-6ccc-4274-9341-ee62e52d8f81-metrics-server-audit-profiles\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.573011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-secret-metrics-client-certs\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.573078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/0be7aba4-6ccc-4274-9341-ee62e52d8f81-audit-log\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.573118 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be7aba4-6ccc-4274-9341-ee62e52d8f81-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.573162 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tkrc\" (UniqueName: \"kubernetes.io/projected/0be7aba4-6ccc-4274-9341-ee62e52d8f81-kube-api-access-4tkrc\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.574765 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/0be7aba4-6ccc-4274-9341-ee62e52d8f81-audit-log\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.576072 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0be7aba4-6ccc-4274-9341-ee62e52d8f81-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.577953 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/0be7aba4-6ccc-4274-9341-ee62e52d8f81-metrics-server-audit-profiles\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.582614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-secret-metrics-client-certs\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.582613 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-client-ca-bundle\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.591458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/0be7aba4-6ccc-4274-9341-ee62e52d8f81-secret-metrics-server-tls\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.595486 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tkrc\" (UniqueName: \"kubernetes.io/projected/0be7aba4-6ccc-4274-9341-ee62e52d8f81-kube-api-access-4tkrc\") pod \"metrics-server-99c5fbc7f-td55z\" (UID: \"0be7aba4-6ccc-4274-9341-ee62e52d8f81\") " pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.670628 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.715424 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2"]
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.716846 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.718814 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.719074 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.720347 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2"]
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.775864 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cf5051f9-9950-4d6f-b3fd-e2abb35de11c-monitoring-plugin-cert\") pod \"monitoring-plugin-675c5cf59d-zj8c2\" (UID: \"cf5051f9-9950-4d6f-b3fd-e2abb35de11c\") " pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.876994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cf5051f9-9950-4d6f-b3fd-e2abb35de11c-monitoring-plugin-cert\") pod \"monitoring-plugin-675c5cf59d-zj8c2\" (UID: \"cf5051f9-9950-4d6f-b3fd-e2abb35de11c\") " pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.881405 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/cf5051f9-9950-4d6f-b3fd-e2abb35de11c-monitoring-plugin-cert\") pod \"monitoring-plugin-675c5cf59d-zj8c2\" (UID: \"cf5051f9-9950-4d6f-b3fd-e2abb35de11c\") " pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.890087 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq" event={"ID":"c0a9d86f-2f91-4412-b05f-bf6a31367dac","Type":"ContainerStarted","Data":"86292885047dbf10000bb3635284a6fef36f0ebaebb29d3373acb456eaaa3813"}
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.890316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq" event={"ID":"c0a9d86f-2f91-4412-b05f-bf6a31367dac","Type":"ContainerStarted","Data":"df7a59bb42d8e94f507dbaf47e8e0d9cd8c7f8608e00fca4288c007c5e71c90e"}
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.890381 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq" event={"ID":"c0a9d86f-2f91-4412-b05f-bf6a31367dac","Type":"ContainerStarted","Data":"6ca7e151f31b34eca48411005dd039c946c89770a483c77fc73331deb7840dfc"}
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.892674 4832 generic.go:334] "Generic (PLEG): container finished" podID="144c7fb3-5112-46d8-acfd-ec9afe4ebf07" containerID="69bb7c694c788252eeddca3bbd627d8671d488d8e85a0507c63b4fbd552077cb" exitCode=0
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.892761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mnqzs" event={"ID":"144c7fb3-5112-46d8-acfd-ec9afe4ebf07","Type":"ContainerDied","Data":"69bb7c694c788252eeddca3bbd627d8671d488d8e85a0507c63b4fbd552077cb"}
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.898381 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch" event={"ID":"035cc1a8-2ce7-4cd8-adaf-53a5ab54ee32","Type":"ContainerStarted","Data":"35bd95738fdecab9d6842165dcb14d898994865b11e2606786fbe4ebce074776"}
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.900508 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" event={"ID":"583fa0c5-0450-444a-bc3c-2fcf7ec6838b","Type":"ContainerStarted","Data":"271d81b8e69b9ac1d917c407fb48d7c515472a8fca84d490aa463d0f925f7bb5"}
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.902357 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595bf5d57c-vnfx5" event={"ID":"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f","Type":"ContainerStarted","Data":"afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066"}
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.902400 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595bf5d57c-vnfx5" event={"ID":"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f","Type":"ContainerStarted","Data":"275c40756a609cc95a2ca87328e7ab5d0ef8527f87c48b90d0079bce0be667a6"}
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.916154 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-98shq" podStartSLOduration=2.814814089 podStartE2EDuration="6.916130047s" podCreationTimestamp="2025-10-02 18:26:10 +0000 UTC" firstStartedPulling="2025-10-02 18:26:11.551303491 +0000 UTC m=+328.520746363" lastFinishedPulling="2025-10-02 18:26:15.652619409 +0000 UTC m=+332.622062321" observedRunningTime="2025-10-02 18:26:16.909493027 +0000 UTC m=+333.878935899" watchObservedRunningTime="2025-10-02 18:26:16.916130047 +0000 UTC m=+333.885572919"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.957732 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-2vvch" podStartSLOduration=3.171726694 podStartE2EDuration="6.957709953s" podCreationTimestamp="2025-10-02 18:26:10 +0000 UTC" firstStartedPulling="2025-10-02 18:26:11.868811169 +0000 UTC m=+328.838254041" lastFinishedPulling="2025-10-02 18:26:15.654794418 +0000 UTC m=+332.624237300" observedRunningTime="2025-10-02 18:26:16.953058526 +0000 UTC m=+333.922501408" watchObservedRunningTime="2025-10-02 18:26:16.957709953 +0000 UTC m=+333.927152825"
Oct 02 18:26:16 crc kubenswrapper[4832]: I1002 18:26:16.980344 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-595bf5d57c-vnfx5" podStartSLOduration=1.980326019 podStartE2EDuration="1.980326019s" podCreationTimestamp="2025-10-02 18:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:26:16.97690589 +0000 UTC m=+333.946348762" watchObservedRunningTime="2025-10-02 18:26:16.980326019 +0000 UTC m=+333.949768891"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.040496 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.326124 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.328535 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.333613 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.333836 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-du92g360k68la"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.333962 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.334492 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.334730 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.334908 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.340112 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.340393 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.341566 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.341673 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.341789 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-khfjq"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.342049 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.342557 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.346402 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.392831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.392874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5rw\" (UniqueName: \"kubernetes.io/projected/2638dfa4-2bdd-4df6-a886-7048c90debba-kube-api-access-fp5rw\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.392960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2638dfa4-2bdd-4df6-a886-7048c90debba-config-out\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.392983 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393021 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2638dfa4-2bdd-4df6-a886-7048c90debba-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393183 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393250 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393319 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393346 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-web-config\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393399 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393421 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393489 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-config\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393583 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.393625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:26:17 crc kubenswrapper[4832]: I1002
18:26:17.393690 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495049 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495111 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-config\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495145 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495190 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495238 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495285 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495307 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp5rw\" (UniqueName: \"kubernetes.io/projected/2638dfa4-2bdd-4df6-a886-7048c90debba-kube-api-access-fp5rw\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " 
pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2638dfa4-2bdd-4df6-a886-7048c90debba-config-out\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495360 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495384 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2638dfa4-2bdd-4df6-a886-7048c90debba-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495494 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495517 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-web-config\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495562 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 
18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.495588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.496146 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.499162 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.499578 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.500737 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.501099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.502224 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.502234 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-web-config\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.502571 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 
18:26:17.503198 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-config\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.503817 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.504339 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.504563 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2638dfa4-2bdd-4df6-a886-7048c90debba-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.504847 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.508496 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.514886 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp5rw\" (UniqueName: \"kubernetes.io/projected/2638dfa4-2bdd-4df6-a886-7048c90debba-kube-api-access-fp5rw\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.516132 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2638dfa4-2bdd-4df6-a886-7048c90debba-config-out\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.516801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2638dfa4-2bdd-4df6-a886-7048c90debba-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.518724 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" 
(UniqueName: \"kubernetes.io/configmap/2638dfa4-2bdd-4df6-a886-7048c90debba-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2638dfa4-2bdd-4df6-a886-7048c90debba\") " pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.584545 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-99c5fbc7f-td55z"] Oct 02 18:26:17 crc kubenswrapper[4832]: W1002 18:26:17.589568 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0be7aba4_6ccc_4274_9341_ee62e52d8f81.slice/crio-cae16eff69f6dba20b3dfab0229b7f0a9f15f63336b75438cdb3e9bf594a70da WatchSource:0}: Error finding container cae16eff69f6dba20b3dfab0229b7f0a9f15f63336b75438cdb3e9bf594a70da: Status 404 returned error can't find the container with id cae16eff69f6dba20b3dfab0229b7f0a9f15f63336b75438cdb3e9bf594a70da Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.656911 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2"] Oct 02 18:26:17 crc kubenswrapper[4832]: W1002 18:26:17.663438 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf5051f9_9950_4d6f_b3fd_e2abb35de11c.slice/crio-498e6b262b74c347e234564f18415f4a5beaa31437b745d93bc66b8106eb94d6 WatchSource:0}: Error finding container 498e6b262b74c347e234564f18415f4a5beaa31437b745d93bc66b8106eb94d6: Status 404 returned error can't find the container with id 498e6b262b74c347e234564f18415f4a5beaa31437b745d93bc66b8106eb94d6 Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.707103 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.909944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mnqzs" event={"ID":"144c7fb3-5112-46d8-acfd-ec9afe4ebf07","Type":"ContainerStarted","Data":"5db3e16f1a1fa24aec506ad1ee11c9dc9eba94160abaea0a2b22b24ac5e6ccc5"} Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.910022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-mnqzs" event={"ID":"144c7fb3-5112-46d8-acfd-ec9afe4ebf07","Type":"ContainerStarted","Data":"e6226f05d1576f00e1549ee504741f2c8eb5ed91bddfce699bdb4864e9932a26"} Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.912860 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2" event={"ID":"cf5051f9-9950-4d6f-b3fd-e2abb35de11c","Type":"ContainerStarted","Data":"498e6b262b74c347e234564f18415f4a5beaa31437b745d93bc66b8106eb94d6"} Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.915030 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d" containerID="2fd899c8163a51981991570c4beb75ac4dc85bba4fc4cca121ab41364dbbae9d" exitCode=0 Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.915114 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d","Type":"ContainerDied","Data":"2fd899c8163a51981991570c4beb75ac4dc85bba4fc4cca121ab41364dbbae9d"} Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.924543 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z" event={"ID":"0be7aba4-6ccc-4274-9341-ee62e52d8f81","Type":"ContainerStarted","Data":"cae16eff69f6dba20b3dfab0229b7f0a9f15f63336b75438cdb3e9bf594a70da"} Oct 02 18:26:17 crc kubenswrapper[4832]: I1002 18:26:17.973829 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-mnqzs" podStartSLOduration=3.406679967 podStartE2EDuration="6.97380612s" podCreationTimestamp="2025-10-02 18:26:11 +0000 UTC" firstStartedPulling="2025-10-02 18:26:12.080415676 +0000 UTC m=+329.049858548" lastFinishedPulling="2025-10-02 18:26:15.647541828 +0000 UTC m=+332.616984701" observedRunningTime="2025-10-02 18:26:17.932153382 +0000 UTC m=+334.901596274" watchObservedRunningTime="2025-10-02 18:26:17.97380612 +0000 UTC m=+334.943248992" Oct 02 18:26:18 crc kubenswrapper[4832]: I1002 18:26:18.136423 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Oct 02 18:26:18 crc kubenswrapper[4832]: W1002 18:26:18.148738 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2638dfa4_2bdd_4df6_a886_7048c90debba.slice/crio-92fff9092c074e99564dd93db94c8d479d82acf2395a91d06c27057e28305b0e WatchSource:0}: Error finding container 92fff9092c074e99564dd93db94c8d479d82acf2395a91d06c27057e28305b0e: Status 404 returned error can't find the container with id 92fff9092c074e99564dd93db94c8d479d82acf2395a91d06c27057e28305b0e Oct 02 18:26:18 crc kubenswrapper[4832]: I1002 18:26:18.937776 4832 generic.go:334] "Generic (PLEG): container finished" podID="2638dfa4-2bdd-4df6-a886-7048c90debba" containerID="20c09e678c17c2cdce71bf2fcb26cce0e7c06a354bed60a8865236546ddd2d75" exitCode=0 Oct 02 18:26:18 crc kubenswrapper[4832]: I1002 18:26:18.937920 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2638dfa4-2bdd-4df6-a886-7048c90debba","Type":"ContainerDied","Data":"20c09e678c17c2cdce71bf2fcb26cce0e7c06a354bed60a8865236546ddd2d75"} Oct 02 18:26:18 crc kubenswrapper[4832]: I1002 18:26:18.938108 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2638dfa4-2bdd-4df6-a886-7048c90debba","Type":"ContainerStarted","Data":"92fff9092c074e99564dd93db94c8d479d82acf2395a91d06c27057e28305b0e"} Oct 02 18:26:20 crc kubenswrapper[4832]: I1002 18:26:20.953756 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2" event={"ID":"cf5051f9-9950-4d6f-b3fd-e2abb35de11c","Type":"ContainerStarted","Data":"846f935dd35363d21052eabd792016e6f0fc6d056eb86b31e26efe4e02d959b5"} Oct 02 18:26:20 crc kubenswrapper[4832]: I1002 18:26:20.954749 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2" Oct 02 18:26:20 crc kubenswrapper[4832]: I1002 18:26:20.956852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" event={"ID":"583fa0c5-0450-444a-bc3c-2fcf7ec6838b","Type":"ContainerStarted","Data":"f421749b850298d0494ec0c6acdec8a5d84ff51c7d5944936d6803927bc54803"} Oct 02 18:26:20 crc kubenswrapper[4832]: I1002 18:26:20.959773 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2" Oct 02 18:26:20 crc kubenswrapper[4832]: I1002 18:26:20.969697 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-675c5cf59d-zj8c2" podStartSLOduration=2.288466289 podStartE2EDuration="4.969665414s" podCreationTimestamp="2025-10-02 18:26:16 +0000 UTC" firstStartedPulling="2025-10-02 18:26:17.665683789 +0000 UTC m=+334.635126681" lastFinishedPulling="2025-10-02 18:26:20.346882934 +0000 UTC m=+337.316325806" observedRunningTime="2025-10-02 18:26:20.967182195 +0000 UTC m=+337.936625077" watchObservedRunningTime="2025-10-02 18:26:20.969665414 +0000 UTC m=+337.939108286" Oct 02 18:26:21 crc kubenswrapper[4832]: I1002 18:26:21.642511 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-926hz" Oct 02 18:26:21 crc kubenswrapper[4832]: I1002 18:26:21.700534 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8gdws"] Oct 02 18:26:21 crc kubenswrapper[4832]: I1002 18:26:21.965029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z" event={"ID":"0be7aba4-6ccc-4274-9341-ee62e52d8f81","Type":"ContainerStarted","Data":"2640e3cd73da71f6a7043d54317733c309a941845c7bb9b2800ef2220f52857e"} Oct 02 18:26:21 crc kubenswrapper[4832]: I1002 18:26:21.968063 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" event={"ID":"583fa0c5-0450-444a-bc3c-2fcf7ec6838b","Type":"ContainerStarted","Data":"bb4d3b2234aa5fd2ec71a0273d8113faf4d057a17114a693d71e7b1ffde4578a"} Oct 02 18:26:21 crc kubenswrapper[4832]: I1002 18:26:21.988281 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z" podStartSLOduration=3.035743249 podStartE2EDuration="5.98824784s" podCreationTimestamp="2025-10-02 18:26:16 +0000 UTC" firstStartedPulling="2025-10-02 18:26:17.592535054 +0000 UTC m=+334.561977926" lastFinishedPulling="2025-10-02 18:26:20.545039645 +0000 UTC m=+337.514482517" observedRunningTime="2025-10-02 18:26:21.983374065 +0000 UTC m=+338.952816987" watchObservedRunningTime="2025-10-02 18:26:21.98824784 +0000 UTC m=+338.957690712" Oct 02 18:26:22 crc kubenswrapper[4832]: I1002 18:26:22.974767 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" event={"ID":"583fa0c5-0450-444a-bc3c-2fcf7ec6838b","Type":"ContainerStarted","Data":"814ccc1052be5e7235505eafffc862abb73a37a7615b4a1b81f5e541a0c6f219"} Oct 02 18:26:22 crc kubenswrapper[4832]: I1002 18:26:22.978187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d","Type":"ContainerStarted","Data":"9c9a3052469b548d33e761da3e9a2a576a86171f041bd16453443bfcac7e25de"} Oct 02 18:26:23 crc kubenswrapper[4832]: I1002 18:26:23.992779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d","Type":"ContainerStarted","Data":"da0d430a3dc75919c63fd50d599d56147d9617a302ede997bf7425143588ad74"} Oct 02 18:26:23 crc kubenswrapper[4832]: I1002 18:26:23.993352 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d","Type":"ContainerStarted","Data":"4191ded20032f9db4f71236b1c48c57b70449948aa559d0bcae38f0172d12252"} Oct 02 18:26:23 crc 
kubenswrapper[4832]: I1002 18:26:23.993378 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d","Type":"ContainerStarted","Data":"7af6c4acfbac441e78e576e18ea2e704efa5c84bb2691ae4350956b3bdb03d5f"} Oct 02 18:26:23 crc kubenswrapper[4832]: I1002 18:26:23.993399 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d","Type":"ContainerStarted","Data":"3d98186e6485307e90ac7577f20005c56ed9a52fc9bdf8f099350b5695a27bf9"} Oct 02 18:26:23 crc kubenswrapper[4832]: I1002 18:26:23.997143 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2638dfa4-2bdd-4df6-a886-7048c90debba","Type":"ContainerStarted","Data":"4d9d114ae522fb71cf3d2b42c783647bdb8901d780ed9ca4224776d53a9fc452"} Oct 02 18:26:23 crc kubenswrapper[4832]: I1002 18:26:23.997215 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2638dfa4-2bdd-4df6-a886-7048c90debba","Type":"ContainerStarted","Data":"cbda25f2c6590137bf20ca9fc1149a1c8511b9a1d4719ef5d8a9492549d297d7"} Oct 02 18:26:23 crc kubenswrapper[4832]: I1002 18:26:23.997238 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2638dfa4-2bdd-4df6-a886-7048c90debba","Type":"ContainerStarted","Data":"05b53706ee9470c2acc139084e0552b60c8221924b1970cd8e7be74d91a5adca"} Oct 02 18:26:23 crc kubenswrapper[4832]: I1002 18:26:23.997250 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2638dfa4-2bdd-4df6-a886-7048c90debba","Type":"ContainerStarted","Data":"dad70664a782aba6856c60ea4c3b38cf1ea453b112bbd8d775ffc3dc22aa3e1a"} Oct 02 18:26:23 crc kubenswrapper[4832]: I1002 18:26:23.997290 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2638dfa4-2bdd-4df6-a886-7048c90debba","Type":"ContainerStarted","Data":"122d42ad2509e982f94809b8a575034525093ffea9779b9263f6b67832c59aaa"} Oct 02 18:26:25 crc kubenswrapper[4832]: I1002 18:26:25.024325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2638dfa4-2bdd-4df6-a886-7048c90debba","Type":"ContainerStarted","Data":"e2df90b9f025bc9a842f47e38c372187e4961665c5c6a4cc21a6620c08c180b5"} Oct 02 18:26:25 crc kubenswrapper[4832]: I1002 18:26:25.084503 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.179620629 podStartE2EDuration="8.08447464s" podCreationTimestamp="2025-10-02 18:26:17 +0000 UTC" firstStartedPulling="2025-10-02 18:26:18.942482257 +0000 UTC m=+335.911925159" lastFinishedPulling="2025-10-02 18:26:22.847336298 +0000 UTC m=+339.816779170" observedRunningTime="2025-10-02 18:26:25.081563707 +0000 UTC m=+342.051006649" watchObservedRunningTime="2025-10-02 18:26:25.08447464 +0000 UTC m=+342.053917542" Oct 02 18:26:26 crc kubenswrapper[4832]: I1002 18:26:26.047469 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" event={"ID":"583fa0c5-0450-444a-bc3c-2fcf7ec6838b","Type":"ContainerStarted","Data":"52f2bd1d7ee80dda064a5c3f851d36654fe9389a8d42db5cfb6b6b49d7e1c050"} Oct 02 18:26:26 crc kubenswrapper[4832]: I1002 18:26:26.047809 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" event={"ID":"583fa0c5-0450-444a-bc3c-2fcf7ec6838b","Type":"ContainerStarted","Data":"8d022510e87389889b4ac87ba9e8d551edc93198084cadedaa1b3c271af728f0"} Oct 02 18:26:26 crc kubenswrapper[4832]: I1002 18:26:26.054531 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a3eb5cf4-2fef-4ce8-8dab-675ef7bbfd5d","Type":"ContainerStarted","Data":"d82339cb585a7f6ee81dee899f744aaebfa3fe139e6832a10280829eb30b1d37"} Oct 02 18:26:26 crc kubenswrapper[4832]: I1002 18:26:26.101482 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.10316592 podStartE2EDuration="14.101456784s" podCreationTimestamp="2025-10-02 18:26:12 +0000 UTC" firstStartedPulling="2025-10-02 18:26:14.104482074 +0000 UTC m=+331.073924956" lastFinishedPulling="2025-10-02 18:26:25.102772918 +0000 UTC m=+342.072215820" observedRunningTime="2025-10-02 18:26:26.098926344 +0000 UTC m=+343.068369316" watchObservedRunningTime="2025-10-02 18:26:26.101456784 +0000 UTC m=+343.070899696" Oct 02 18:26:26 crc kubenswrapper[4832]: I1002 18:26:26.153418 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-595bf5d57c-vnfx5" Oct 02 18:26:26 crc kubenswrapper[4832]: I1002 18:26:26.153518 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-595bf5d57c-vnfx5" Oct 02 18:26:26 crc kubenswrapper[4832]: I1002 18:26:26.173428 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-595bf5d57c-vnfx5" Oct 02 18:26:27 crc kubenswrapper[4832]: I1002 18:26:27.065636 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" event={"ID":"583fa0c5-0450-444a-bc3c-2fcf7ec6838b","Type":"ContainerStarted","Data":"1f25b95194ea642863994fca1ca88bca84c5895e93757f211cbfa680fd7d4c6b"} Oct 02 18:26:27 crc kubenswrapper[4832]: I1002 18:26:27.066934 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" Oct 02 18:26:27 crc kubenswrapper[4832]: I1002 18:26:27.073920 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-595bf5d57c-vnfx5" Oct 02 18:26:27 crc kubenswrapper[4832]: I1002 18:26:27.080886 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" Oct 02 18:26:27 crc kubenswrapper[4832]: I1002 18:26:27.110822 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6b85fb77d7-5bjn2" podStartSLOduration=6.016892434 podStartE2EDuration="15.110786448s" podCreationTimestamp="2025-10-02 18:26:12 +0000 UTC" firstStartedPulling="2025-10-02 18:26:16.006074516 +0000 UTC m=+332.975517388" lastFinishedPulling="2025-10-02 18:26:25.0999685 +0000 UTC m=+342.069411402" observedRunningTime="2025-10-02 18:26:27.101058231 +0000 UTC m=+344.070501143" watchObservedRunningTime="2025-10-02 18:26:27.110786448 +0000 UTC m=+344.080229350" Oct 02 18:26:27 crc kubenswrapper[4832]: I1002 18:26:27.200017 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7ffgv"] Oct 02 18:26:27 crc kubenswrapper[4832]: I1002 18:26:27.707950 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/prometheus-k8s-0" Oct 02 18:26:36 crc kubenswrapper[4832]: I1002 18:26:36.670776 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z" Oct 02 18:26:36 crc kubenswrapper[4832]: I1002 18:26:36.671567 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z" Oct 02 18:26:46 crc kubenswrapper[4832]: I1002 18:26:46.744109 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" podUID="e8d46891-e775-4f73-b366-544ba67c1adf" containerName="registry" containerID="cri-o://26de5668cd3d1fbfcfd2aae5481a713a98822ad3f648fb1a64bb77e6e6b27a03" gracePeriod=30 Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.245674 4832 generic.go:334] "Generic (PLEG): container finished" podID="e8d46891-e775-4f73-b366-544ba67c1adf" containerID="26de5668cd3d1fbfcfd2aae5481a713a98822ad3f648fb1a64bb77e6e6b27a03" exitCode=0 Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.245816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" event={"ID":"e8d46891-e775-4f73-b366-544ba67c1adf","Type":"ContainerDied","Data":"26de5668cd3d1fbfcfd2aae5481a713a98822ad3f648fb1a64bb77e6e6b27a03"} Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.519047 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.710693 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-registry-tls\") pod \"e8d46891-e775-4f73-b366-544ba67c1adf\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.710973 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8d46891-e775-4f73-b366-544ba67c1adf-ca-trust-extracted\") pod \"e8d46891-e775-4f73-b366-544ba67c1adf\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.711192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e8d46891-e775-4f73-b366-544ba67c1adf\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.711314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-trusted-ca\") pod \"e8d46891-e775-4f73-b366-544ba67c1adf\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.711373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-bound-sa-token\") pod \"e8d46891-e775-4f73-b366-544ba67c1adf\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.711437 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klklf\" 
(UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-kube-api-access-klklf\") pod \"e8d46891-e775-4f73-b366-544ba67c1adf\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.711492 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8d46891-e775-4f73-b366-544ba67c1adf-installation-pull-secrets\") pod \"e8d46891-e775-4f73-b366-544ba67c1adf\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.711575 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-registry-certificates\") pod \"e8d46891-e775-4f73-b366-544ba67c1adf\" (UID: \"e8d46891-e775-4f73-b366-544ba67c1adf\") " Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.715247 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e8d46891-e775-4f73-b366-544ba67c1adf" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.715907 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e8d46891-e775-4f73-b366-544ba67c1adf" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.722659 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e8d46891-e775-4f73-b366-544ba67c1adf" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.724984 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-kube-api-access-klklf" (OuterVolumeSpecName: "kube-api-access-klklf") pod "e8d46891-e775-4f73-b366-544ba67c1adf" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf"). InnerVolumeSpecName "kube-api-access-klklf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.725181 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e8d46891-e775-4f73-b366-544ba67c1adf" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.730742 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e8d46891-e775-4f73-b366-544ba67c1adf" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.740603 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8d46891-e775-4f73-b366-544ba67c1adf-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e8d46891-e775-4f73-b366-544ba67c1adf" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.744193 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d46891-e775-4f73-b366-544ba67c1adf-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e8d46891-e775-4f73-b366-544ba67c1adf" (UID: "e8d46891-e775-4f73-b366-544ba67c1adf"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.814418 4832 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.814491 4832 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e8d46891-e775-4f73-b366-544ba67c1adf-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.814524 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.814550 4832 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.814576 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klklf\" (UniqueName: \"kubernetes.io/projected/e8d46891-e775-4f73-b366-544ba67c1adf-kube-api-access-klklf\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.814609 4832 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e8d46891-e775-4f73-b366-544ba67c1adf-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:49 crc kubenswrapper[4832]: I1002 18:26:49.814634 4832 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e8d46891-e775-4f73-b366-544ba67c1adf-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:50 crc kubenswrapper[4832]: I1002 18:26:50.257965 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" event={"ID":"e8d46891-e775-4f73-b366-544ba67c1adf","Type":"ContainerDied","Data":"6be3947cc118ab0993d28bb519a3075b670799bdcfb271b2e7de286ab64def08"} Oct 02 18:26:50 crc kubenswrapper[4832]: I1002 18:26:50.258034 4832 scope.go:117] "RemoveContainer" containerID="26de5668cd3d1fbfcfd2aae5481a713a98822ad3f648fb1a64bb77e6e6b27a03" Oct 02 18:26:50 crc kubenswrapper[4832]: I1002 18:26:50.258047 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8gdws" Oct 02 18:26:50 crc kubenswrapper[4832]: I1002 18:26:50.305631 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8gdws"] Oct 02 18:26:50 crc kubenswrapper[4832]: I1002 18:26:50.307943 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8gdws"] Oct 02 18:26:51 crc kubenswrapper[4832]: I1002 18:26:51.238469 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d46891-e775-4f73-b366-544ba67c1adf" path="/var/lib/kubelet/pods/e8d46891-e775-4f73-b366-544ba67c1adf/volumes" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.259674 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-7ffgv" podUID="b562a645-10d9-44f7-a4fe-d3bf63ac9185" containerName="console" containerID="cri-o://f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6" gracePeriod=15 Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.712233 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7ffgv_b562a645-10d9-44f7-a4fe-d3bf63ac9185/console/0.log" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.712489 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.873068 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-oauth-serving-cert\") pod \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.873733 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-serving-cert\") pod \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.873936 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b562a645-10d9-44f7-a4fe-d3bf63ac9185" (UID: "b562a645-10d9-44f7-a4fe-d3bf63ac9185"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.875117 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9vmd\" (UniqueName: \"kubernetes.io/projected/b562a645-10d9-44f7-a4fe-d3bf63ac9185-kube-api-access-t9vmd\") pod \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.875171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-config\") pod \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.875228 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-oauth-config\") pod \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.875379 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-service-ca\") pod \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.875493 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-trusted-ca-bundle\") pod \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\" (UID: \"b562a645-10d9-44f7-a4fe-d3bf63ac9185\") " Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.875942 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.876395 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-service-ca" (OuterVolumeSpecName: "service-ca") pod "b562a645-10d9-44f7-a4fe-d3bf63ac9185" (UID: "b562a645-10d9-44f7-a4fe-d3bf63ac9185"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.876436 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-config" (OuterVolumeSpecName: "console-config") pod "b562a645-10d9-44f7-a4fe-d3bf63ac9185" (UID: "b562a645-10d9-44f7-a4fe-d3bf63ac9185"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.877009 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b562a645-10d9-44f7-a4fe-d3bf63ac9185" (UID: "b562a645-10d9-44f7-a4fe-d3bf63ac9185"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.882998 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b562a645-10d9-44f7-a4fe-d3bf63ac9185-kube-api-access-t9vmd" (OuterVolumeSpecName: "kube-api-access-t9vmd") pod "b562a645-10d9-44f7-a4fe-d3bf63ac9185" (UID: "b562a645-10d9-44f7-a4fe-d3bf63ac9185"). InnerVolumeSpecName "kube-api-access-t9vmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.883651 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b562a645-10d9-44f7-a4fe-d3bf63ac9185" (UID: "b562a645-10d9-44f7-a4fe-d3bf63ac9185"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.888355 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b562a645-10d9-44f7-a4fe-d3bf63ac9185" (UID: "b562a645-10d9-44f7-a4fe-d3bf63ac9185"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.977681 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.977756 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9vmd\" (UniqueName: \"kubernetes.io/projected/b562a645-10d9-44f7-a4fe-d3bf63ac9185-kube-api-access-t9vmd\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.977786 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.977811 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b562a645-10d9-44f7-a4fe-d3bf63ac9185-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.977836 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:52 crc kubenswrapper[4832]: I1002 18:26:52.977861 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b562a645-10d9-44f7-a4fe-d3bf63ac9185-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.288305 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7ffgv_b562a645-10d9-44f7-a4fe-d3bf63ac9185/console/0.log" Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.288381 4832 generic.go:334] "Generic (PLEG): container finished" podID="b562a645-10d9-44f7-a4fe-d3bf63ac9185" containerID="f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6" exitCode=2 Oct 02 
18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.288448 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7ffgv" event={"ID":"b562a645-10d9-44f7-a4fe-d3bf63ac9185","Type":"ContainerDied","Data":"f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6"} Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.288489 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7ffgv" event={"ID":"b562a645-10d9-44f7-a4fe-d3bf63ac9185","Type":"ContainerDied","Data":"b22d68221f6c7cc1c64e68081cb575dc4c520c4d72d7c3b56546a1520e083c5c"} Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.288490 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7ffgv" Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.288541 4832 scope.go:117] "RemoveContainer" containerID="f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6" Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.317641 4832 scope.go:117] "RemoveContainer" containerID="f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6" Oct 02 18:26:53 crc kubenswrapper[4832]: E1002 18:26:53.318247 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6\": container with ID starting with f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6 not found: ID does not exist" containerID="f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6" Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.318398 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6"} err="failed to get container status \"f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6\": rpc error: code = NotFound desc = could not find container \"f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6\": container with ID starting with f1940dec42c8eeaad55c01cfa6cdf5b1e4d14cba92812adbc95f305952047da6 not found: ID does not exist" Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.319168 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7ffgv"] Oct 02 18:26:53 crc kubenswrapper[4832]: I1002 18:26:53.322697 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-7ffgv"] Oct 02 18:26:55 crc kubenswrapper[4832]: I1002 18:26:55.234375 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b562a645-10d9-44f7-a4fe-d3bf63ac9185" path="/var/lib/kubelet/pods/b562a645-10d9-44f7-a4fe-d3bf63ac9185/volumes" Oct 02 18:26:56 crc kubenswrapper[4832]: I1002 18:26:56.680542 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z" Oct 02 18:26:56 crc kubenswrapper[4832]: I1002 18:26:56.688423 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-99c5fbc7f-td55z" Oct 02 18:26:56 crc kubenswrapper[4832]: I1002 18:26:56.876209 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Oct 02 18:26:56 crc kubenswrapper[4832]: I1002 18:26:56.876364 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 18:27:17 crc kubenswrapper[4832]: I1002 18:27:17.708463 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:27:17 crc kubenswrapper[4832]: I1002 18:27:17.753330 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:27:18 crc kubenswrapper[4832]: I1002 18:27:18.485542 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Oct 02 18:27:26 crc kubenswrapper[4832]: I1002 18:27:26.875819 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 18:27:26 crc kubenswrapper[4832]: I1002 18:27:26.876734 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.670477 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-949d8b8f8-m2cbs"]
Oct 02 18:27:33 crc kubenswrapper[4832]: E1002 18:27:33.671682 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d46891-e775-4f73-b366-544ba67c1adf" containerName="registry"
Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.671715 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d46891-e775-4f73-b366-544ba67c1adf" containerName="registry"
Oct 02 18:27:33 crc kubenswrapper[4832]: E1002 18:27:33.671765 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b562a645-10d9-44f7-a4fe-d3bf63ac9185" containerName="console"
Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.671783 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b562a645-10d9-44f7-a4fe-d3bf63ac9185" containerName="console"
Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.672059 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b562a645-10d9-44f7-a4fe-d3bf63ac9185" containerName="console"
Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.672090 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d46891-e775-4f73-b366-544ba67c1adf" containerName="registry"
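The recurring machine-config-daemon failure above boils down to an HTTP GET against 127.0.0.1:8798 being refused. A liveness check of that shape can be reproduced with plain net/http; the one-second timeout is an assumption, and the 200-399 success window mirrors how Kubernetes HTTP probes are generally described:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check against url.
func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}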
Need to start a new one" pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.694874 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-949d8b8f8-m2cbs"] Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.775508 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5n8b\" (UniqueName: \"kubernetes.io/projected/7ae2edd9-becf-44a4-aa8f-0951901a89c4-kube-api-access-n5n8b\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.775780 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-oauth-serving-cert\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.775799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-serving-cert\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.775846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-service-ca\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.775863 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-oauth-config\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.775897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-config\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.776070 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-trusted-ca-bundle\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.877622 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-service-ca\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 
18:27:33.877686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-oauth-config\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.877740 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-config\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.877773 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-trusted-ca-bundle\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.877819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5n8b\" (UniqueName: \"kubernetes.io/projected/7ae2edd9-becf-44a4-aa8f-0951901a89c4-kube-api-access-n5n8b\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.877858 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-oauth-serving-cert\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.877882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-serving-cert\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.879676 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-config\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.879708 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-service-ca\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.879747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-oauth-serving-cert\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.880240 4832 operation_generator.go:637] 
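All seven console volumes mount in two phases here: VerifyControllerAttachedVolume is immediately satisfied for node-local types (configMap, secret, projected need no controller attach), and MountVolume.SetUp then materializes the payload under the pod's volumes directory. A heavily simplified sketch of the configMap case, assuming only the directory layout visible in these paths (the real kubelet writes atomically via timestamped subdirectories and symlinks, which this omits; the payload below is invented):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setUpConfigMap writes each key of a configMap as a file under
// <root>/pods/<podUID>/volumes/kubernetes.io~configmap/<volume>/.
func setUpConfigMap(kubeletRoot, podUID, volName string, data map[string]string) error {
	dir := filepath.Join(kubeletRoot, "pods", podUID, "volumes", "kubernetes.io~configmap", volName)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	for key, val := range data {
		if err := os.WriteFile(filepath.Join(dir, key), []byte(val), 0o644); err != nil {
			return err
		}
	}
	fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", volName)
	return nil
}

func main() {
	// Demo under a temp root rather than /var/lib/kubelet.
	_ = setUpConfigMap(os.TempDir(), "7ae2edd9-becf-44a4-aa8f-0951901a89c4",
		"console-config", map[string]string{"console-config.yaml": "apiVersion: console.openshift.io/v1\n"})
}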
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-trusted-ca-bundle\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.888246 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-serving-cert\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.889463 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-oauth-config\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:33 crc kubenswrapper[4832]: I1002 18:27:33.909001 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5n8b\" (UniqueName: \"kubernetes.io/projected/7ae2edd9-becf-44a4-aa8f-0951901a89c4-kube-api-access-n5n8b\") pod \"console-949d8b8f8-m2cbs\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:34 crc kubenswrapper[4832]: I1002 18:27:34.002578 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:34 crc kubenswrapper[4832]: I1002 18:27:34.273812 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-949d8b8f8-m2cbs"] Oct 02 18:27:34 crc kubenswrapper[4832]: I1002 18:27:34.590861 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-949d8b8f8-m2cbs" event={"ID":"7ae2edd9-becf-44a4-aa8f-0951901a89c4","Type":"ContainerStarted","Data":"381e8b8e88892b82401f765931b301ae1fe986295243aaa0d0b7a4cd8e54c5b3"} Oct 02 18:27:35 crc kubenswrapper[4832]: I1002 18:27:35.599014 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-949d8b8f8-m2cbs" event={"ID":"7ae2edd9-becf-44a4-aa8f-0951901a89c4","Type":"ContainerStarted","Data":"2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f"} Oct 02 18:27:35 crc kubenswrapper[4832]: I1002 18:27:35.624464 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-949d8b8f8-m2cbs" podStartSLOduration=2.624436781 podStartE2EDuration="2.624436781s" podCreationTimestamp="2025-10-02 18:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:27:35.622705516 +0000 UTC m=+412.592148418" watchObservedRunningTime="2025-10-02 18:27:35.624436781 +0000 UTC m=+412.593879683" Oct 02 18:27:44 crc kubenswrapper[4832]: I1002 18:27:44.004878 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:44 crc kubenswrapper[4832]: I1002 18:27:44.005988 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:44 crc kubenswrapper[4832]: I1002 18:27:44.012799 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:44 crc kubenswrapper[4832]: I1002 18:27:44.681094 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:27:44 crc kubenswrapper[4832]: I1002 18:27:44.763158 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-595bf5d57c-vnfx5"] Oct 02 18:27:56 crc kubenswrapper[4832]: I1002 18:27:56.876374 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:27:56 crc kubenswrapper[4832]: I1002 18:27:56.877581 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:27:56 crc kubenswrapper[4832]: I1002 18:27:56.877686 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:27:56 crc kubenswrapper[4832]: I1002 18:27:56.879589 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"029420f8fd747c5d74aa276bd82319f4ec00978e474c4a1efa16e6ab08101758"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 18:27:56 crc kubenswrapper[4832]: I1002 18:27:56.879682 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://029420f8fd747c5d74aa276bd82319f4ec00978e474c4a1efa16e6ab08101758" gracePeriod=600 Oct 02 18:27:57 crc kubenswrapper[4832]: I1002 18:27:57.786673 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="029420f8fd747c5d74aa276bd82319f4ec00978e474c4a1efa16e6ab08101758" exitCode=0 Oct 02 18:27:57 crc kubenswrapper[4832]: I1002 18:27:57.786900 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"029420f8fd747c5d74aa276bd82319f4ec00978e474c4a1efa16e6ab08101758"} Oct 02 18:27:57 crc kubenswrapper[4832]: I1002 18:27:57.787170 4832 scope.go:117] "RemoveContainer" containerID="b4bf70bdf85a4e54c52b992022b9fd4953fd09eb7b48db7585495d7ea64c282c" Oct 02 18:27:58 crc kubenswrapper[4832]: I1002 18:27:58.799229 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"20aef20185e6aef6323d2c1f8a7e5979029b54a32ae8f3401b046519bdae37e1"} Oct 02 18:28:09 crc kubenswrapper[4832]: I1002 18:28:09.807501 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-595bf5d57c-vnfx5" podUID="b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" containerName="console" 
containerID="cri-o://afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066" gracePeriod=15 Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.296942 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-595bf5d57c-vnfx5_b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f/console/0.log" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.297326 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595bf5d57c-vnfx5" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.501153 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-serving-cert\") pod \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.501284 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-service-ca\") pod \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.501394 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-config\") pod \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.501437 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-trusted-ca-bundle\") pod \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.501474 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm4w7\" (UniqueName: \"kubernetes.io/projected/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-kube-api-access-mm4w7\") pod \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.501536 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-oauth-serving-cert\") pod \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.501576 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-oauth-config\") pod \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\" (UID: \"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f\") " Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.502230 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-config" (OuterVolumeSpecName: "console-config") pod "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" (UID: "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.502348 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-service-ca" (OuterVolumeSpecName: "service-ca") pod "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" (UID: "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.502596 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" (UID: "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.502693 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" (UID: "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.506532 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-kube-api-access-mm4w7" (OuterVolumeSpecName: "kube-api-access-mm4w7") pod "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" (UID: "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f"). InnerVolumeSpecName "kube-api-access-mm4w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.507739 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" (UID: "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.508847 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" (UID: "b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.603230 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.603261 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.603290 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.603300 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm4w7\" (UniqueName: \"kubernetes.io/projected/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-kube-api-access-mm4w7\") on node \"crc\" DevicePath \"\"" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.603310 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.603320 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.603330 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.897471 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-595bf5d57c-vnfx5_b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f/console/0.log" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.897561 4832 generic.go:334] "Generic (PLEG): container finished" podID="b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" containerID="afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066" exitCode=2 Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.897618 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595bf5d57c-vnfx5" event={"ID":"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f","Type":"ContainerDied","Data":"afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066"} Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.897681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595bf5d57c-vnfx5" event={"ID":"b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f","Type":"ContainerDied","Data":"275c40756a609cc95a2ca87328e7ab5d0ef8527f87c48b90d0079bce0be667a6"} Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.897695 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-595bf5d57c-vnfx5" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.897717 4832 scope.go:117] "RemoveContainer" containerID="afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.938798 4832 scope.go:117] "RemoveContainer" containerID="afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066" Oct 02 18:28:10 crc kubenswrapper[4832]: E1002 18:28:10.939995 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066\": container with ID starting with afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066 not found: ID does not exist" containerID="afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.940056 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066"} err="failed to get container status \"afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066\": rpc error: code = NotFound desc = could not find container \"afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066\": container with ID starting with afa1f52fa32fc99e000db1577eabddff50ad0d35e3ce182818b7b32c37e99066 not found: ID does not exist" Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.960811 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-595bf5d57c-vnfx5"] Oct 02 18:28:10 crc kubenswrapper[4832]: I1002 18:28:10.967838 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-595bf5d57c-vnfx5"] Oct 02 18:28:11 crc kubenswrapper[4832]: I1002 18:28:11.238373 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" path="/var/lib/kubelet/pods/b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f/volumes" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.149399 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc"] Oct 02 18:30:00 crc kubenswrapper[4832]: E1002 18:30:00.150413 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" containerName="console" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.150436 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" containerName="console" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.150713 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b684e9c6-f9cf-4b5c-914a-3cb52a34ac9f" containerName="console" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.151648 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.154910 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.158039 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.161121 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc"] Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.262580 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flsbx\" (UniqueName: \"kubernetes.io/projected/aed091cf-edba-49a6-96fc-878ea590bfa8-kube-api-access-flsbx\") pod \"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.262803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed091cf-edba-49a6-96fc-878ea590bfa8-config-volume\") pod \"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.262867 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aed091cf-edba-49a6-96fc-878ea590bfa8-secret-volume\") pod \"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.365110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flsbx\" (UniqueName: \"kubernetes.io/projected/aed091cf-edba-49a6-96fc-878ea590bfa8-kube-api-access-flsbx\") pod \"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.365238 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed091cf-edba-49a6-96fc-878ea590bfa8-config-volume\") pod \"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.365301 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aed091cf-edba-49a6-96fc-878ea590bfa8-secret-volume\") pod \"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.367205 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed091cf-edba-49a6-96fc-878ea590bfa8-config-volume\") pod 
\"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.376761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aed091cf-edba-49a6-96fc-878ea590bfa8-secret-volume\") pod \"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.393845 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flsbx\" (UniqueName: \"kubernetes.io/projected/aed091cf-edba-49a6-96fc-878ea590bfa8-kube-api-access-flsbx\") pod \"collect-profiles-29323830-6x2gc\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.474715 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:00 crc kubenswrapper[4832]: I1002 18:30:00.736130 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc"] Oct 02 18:30:01 crc kubenswrapper[4832]: I1002 18:30:01.746163 4832 generic.go:334] "Generic (PLEG): container finished" podID="aed091cf-edba-49a6-96fc-878ea590bfa8" containerID="10ba5474cb84559a020e1d27a7a0af7fd0ebb8b2d5f8e7aecb9b5d0ccc96232f" exitCode=0 Oct 02 18:30:01 crc kubenswrapper[4832]: I1002 18:30:01.746329 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" event={"ID":"aed091cf-edba-49a6-96fc-878ea590bfa8","Type":"ContainerDied","Data":"10ba5474cb84559a020e1d27a7a0af7fd0ebb8b2d5f8e7aecb9b5d0ccc96232f"} Oct 02 18:30:01 crc kubenswrapper[4832]: I1002 18:30:01.747204 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" event={"ID":"aed091cf-edba-49a6-96fc-878ea590bfa8","Type":"ContainerStarted","Data":"400e38f2f4cc29d9b55c09a7ac180b412402d392551efb5108ecd4050fe08c6a"} Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.000229 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.108293 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aed091cf-edba-49a6-96fc-878ea590bfa8-secret-volume\") pod \"aed091cf-edba-49a6-96fc-878ea590bfa8\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.108482 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flsbx\" (UniqueName: \"kubernetes.io/projected/aed091cf-edba-49a6-96fc-878ea590bfa8-kube-api-access-flsbx\") pod \"aed091cf-edba-49a6-96fc-878ea590bfa8\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.108547 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed091cf-edba-49a6-96fc-878ea590bfa8-config-volume\") pod \"aed091cf-edba-49a6-96fc-878ea590bfa8\" (UID: \"aed091cf-edba-49a6-96fc-878ea590bfa8\") " Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.109356 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aed091cf-edba-49a6-96fc-878ea590bfa8-config-volume" (OuterVolumeSpecName: "config-volume") pod "aed091cf-edba-49a6-96fc-878ea590bfa8" (UID: "aed091cf-edba-49a6-96fc-878ea590bfa8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.116362 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aed091cf-edba-49a6-96fc-878ea590bfa8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aed091cf-edba-49a6-96fc-878ea590bfa8" (UID: "aed091cf-edba-49a6-96fc-878ea590bfa8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.117119 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed091cf-edba-49a6-96fc-878ea590bfa8-kube-api-access-flsbx" (OuterVolumeSpecName: "kube-api-access-flsbx") pod "aed091cf-edba-49a6-96fc-878ea590bfa8" (UID: "aed091cf-edba-49a6-96fc-878ea590bfa8"). InnerVolumeSpecName "kube-api-access-flsbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.210628 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aed091cf-edba-49a6-96fc-878ea590bfa8-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.210680 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flsbx\" (UniqueName: \"kubernetes.io/projected/aed091cf-edba-49a6-96fc-878ea590bfa8-kube-api-access-flsbx\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.210690 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aed091cf-edba-49a6-96fc-878ea590bfa8-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.760821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" event={"ID":"aed091cf-edba-49a6-96fc-878ea590bfa8","Type":"ContainerDied","Data":"400e38f2f4cc29d9b55c09a7ac180b412402d392551efb5108ecd4050fe08c6a"} Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.760877 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="400e38f2f4cc29d9b55c09a7ac180b412402d392551efb5108ecd4050fe08c6a" Oct 02 18:30:03 crc kubenswrapper[4832]: I1002 18:30:03.760924 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc" Oct 02 18:30:26 crc kubenswrapper[4832]: I1002 18:30:26.875712 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:30:27 crc kubenswrapper[4832]: I1002 18:30:26.876314 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.152979 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf"] Oct 02 18:30:41 crc kubenswrapper[4832]: E1002 18:30:41.153761 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed091cf-edba-49a6-96fc-878ea590bfa8" containerName="collect-profiles" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.153775 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed091cf-edba-49a6-96fc-878ea590bfa8" containerName="collect-profiles" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.153901 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed091cf-edba-49a6-96fc-878ea590bfa8" containerName="collect-profiles" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.154817 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.158068 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.164429 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf"] Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.288799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.289138 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.289480 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4trp8\" (UniqueName: \"kubernetes.io/projected/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-kube-api-access-4trp8\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.390755 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.390815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4trp8\" (UniqueName: \"kubernetes.io/projected/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-kube-api-access-4trp8\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.390870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.391330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-bundle\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.391355 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-util\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.417285 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4trp8\" (UniqueName: \"kubernetes.io/projected/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-kube-api-access-4trp8\") pod \"a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.484049 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:41 crc kubenswrapper[4832]: I1002 18:30:41.692979 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf"] Oct 02 18:30:42 crc kubenswrapper[4832]: I1002 18:30:42.021760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" event={"ID":"7c5c5779-5cc4-48b6-92ad-5c2e2248804d","Type":"ContainerStarted","Data":"2fc81ab164fd02e7a9abdf9fa8caf4ea8eba691f88b204f13201ada3a6fa6f68"} Oct 02 18:30:42 crc kubenswrapper[4832]: I1002 18:30:42.022292 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" event={"ID":"7c5c5779-5cc4-48b6-92ad-5c2e2248804d","Type":"ContainerStarted","Data":"1274de5152e327233dbdf1be66a3c154798174140b599bd5d924551e0cdcf665"} Oct 02 18:30:43 crc kubenswrapper[4832]: I1002 18:30:43.029765 4832 generic.go:334] "Generic (PLEG): container finished" podID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerID="2fc81ab164fd02e7a9abdf9fa8caf4ea8eba691f88b204f13201ada3a6fa6f68" exitCode=0 Oct 02 18:30:43 crc kubenswrapper[4832]: I1002 18:30:43.029842 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" event={"ID":"7c5c5779-5cc4-48b6-92ad-5c2e2248804d","Type":"ContainerDied","Data":"2fc81ab164fd02e7a9abdf9fa8caf4ea8eba691f88b204f13201ada3a6fa6f68"} Oct 02 18:30:43 crc kubenswrapper[4832]: I1002 18:30:43.032677 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 18:30:45 crc kubenswrapper[4832]: I1002 18:30:45.043901 4832 generic.go:334] "Generic (PLEG): container finished" podID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerID="aa283bd13b9fa2a013c42d7e81c989d11790d742742d160ee488075655045b17" exitCode=0 Oct 02 18:30:45 crc kubenswrapper[4832]: I1002 18:30:45.043977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" event={"ID":"7c5c5779-5cc4-48b6-92ad-5c2e2248804d","Type":"ContainerDied","Data":"aa283bd13b9fa2a013c42d7e81c989d11790d742742d160ee488075655045b17"} Oct 02 18:30:46 crc kubenswrapper[4832]: I1002 18:30:46.055067 4832 generic.go:334] "Generic (PLEG): container finished" podID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerID="e85819398d662a047da773f98764ae0f3734c25139f9e109449ff5d69243aace" exitCode=0 Oct 02 18:30:46 crc kubenswrapper[4832]: I1002 18:30:46.055145 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" event={"ID":"7c5c5779-5cc4-48b6-92ad-5c2e2248804d","Type":"ContainerDied","Data":"e85819398d662a047da773f98764ae0f3734c25139f9e109449ff5d69243aace"} Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.310223 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.400613 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-util\") pod \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.400832 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-bundle\") pod \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.400904 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4trp8\" (UniqueName: \"kubernetes.io/projected/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-kube-api-access-4trp8\") pod \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\" (UID: \"7c5c5779-5cc4-48b6-92ad-5c2e2248804d\") " Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.407545 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-kube-api-access-4trp8" (OuterVolumeSpecName: "kube-api-access-4trp8") pod "7c5c5779-5cc4-48b6-92ad-5c2e2248804d" (UID: "7c5c5779-5cc4-48b6-92ad-5c2e2248804d"). InnerVolumeSpecName "kube-api-access-4trp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.408024 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-bundle" (OuterVolumeSpecName: "bundle") pod "7c5c5779-5cc4-48b6-92ad-5c2e2248804d" (UID: "7c5c5779-5cc4-48b6-92ad-5c2e2248804d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.486345 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-util" (OuterVolumeSpecName: "util") pod "7c5c5779-5cc4-48b6-92ad-5c2e2248804d" (UID: "7c5c5779-5cc4-48b6-92ad-5c2e2248804d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.502896 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.502935 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4trp8\" (UniqueName: \"kubernetes.io/projected/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-kube-api-access-4trp8\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:47 crc kubenswrapper[4832]: I1002 18:30:47.502951 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c5c5779-5cc4-48b6-92ad-5c2e2248804d-util\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:48 crc kubenswrapper[4832]: I1002 18:30:48.073674 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" event={"ID":"7c5c5779-5cc4-48b6-92ad-5c2e2248804d","Type":"ContainerDied","Data":"1274de5152e327233dbdf1be66a3c154798174140b599bd5d924551e0cdcf665"} Oct 02 18:30:48 crc kubenswrapper[4832]: I1002 18:30:48.073716 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1274de5152e327233dbdf1be66a3c154798174140b599bd5d924551e0cdcf665" Oct 02 18:30:48 crc kubenswrapper[4832]: I1002 18:30:48.073751 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf" Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.842019 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9sz9w"] Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.842854 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovn-controller" containerID="cri-o://a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc" gracePeriod=30 Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.843175 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="sbdb" containerID="cri-o://edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339" gracePeriod=30 Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.843211 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="nbdb" containerID="cri-o://87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76" gracePeriod=30 Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.843239 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="northd" containerID="cri-o://a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd" gracePeriod=30 Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.843324 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kube-rbac-proxy-node" 
containerID="cri-o://35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1" gracePeriod=30 Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.843336 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b" gracePeriod=30 Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.843400 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovn-acl-logging" containerID="cri-o://04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e" gracePeriod=30 Oct 02 18:30:52 crc kubenswrapper[4832]: I1002 18:30:52.874312 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" containerID="cri-o://7b7ed9c483dab864dba141917bd6140e6b26be62665400cf5e2a0ddc4cbc418e" gracePeriod=30 Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.111923 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/3.log" Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.115573 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovn-acl-logging/0.log" Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.116290 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovn-controller/0.log" Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.116729 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e" exitCode=143 Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.116758 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc" exitCode=143 Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.116809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e"} Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.116856 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc"} Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.123444 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/2.log" Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.123927 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/1.log" Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.123979 
4832 generic.go:334] "Generic (PLEG): container finished" podID="7319e265-17de-4801-8ab7-7671dba7489d" containerID="3fd04f87293784afc729f1d771a6655a3c23151c34c8517161cd3a820cb2cbc5" exitCode=2 Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.124016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhm4n" event={"ID":"7319e265-17de-4801-8ab7-7671dba7489d","Type":"ContainerDied","Data":"3fd04f87293784afc729f1d771a6655a3c23151c34c8517161cd3a820cb2cbc5"} Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.124076 4832 scope.go:117] "RemoveContainer" containerID="fb1426b011d18013e2707b04e8f6d79821c592635976c3a58c7ff94c0f2135c3" Oct 02 18:30:53 crc kubenswrapper[4832]: I1002 18:30:53.124771 4832 scope.go:117] "RemoveContainer" containerID="3fd04f87293784afc729f1d771a6655a3c23151c34c8517161cd3a820cb2cbc5" Oct 02 18:30:53 crc kubenswrapper[4832]: E1002 18:30:53.125531 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lhm4n_openshift-multus(7319e265-17de-4801-8ab7-7671dba7489d)\"" pod="openshift-multus/multus-lhm4n" podUID="7319e265-17de-4801-8ab7-7671dba7489d" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.131787 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/2.log" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.134567 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovnkube-controller/3.log" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.136829 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovn-acl-logging/0.log" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.137459 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovn-controller/0.log" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.137930 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="7b7ed9c483dab864dba141917bd6140e6b26be62665400cf5e2a0ddc4cbc418e" exitCode=0 Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.137955 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339" exitCode=0 Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.137962 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76" exitCode=0 Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.137969 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd" exitCode=0 Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.137976 4832 generic.go:334] "Generic (PLEG): container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b" exitCode=0 Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.137982 4832 generic.go:334] "Generic (PLEG): 
container finished" podID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerID="35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1" exitCode=0 Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.137997 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"7b7ed9c483dab864dba141917bd6140e6b26be62665400cf5e2a0ddc4cbc418e"} Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.138040 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339"} Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.138057 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76"} Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.138070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd"} Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.138084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b"} Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.138101 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1"} Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.138106 4832 scope.go:117] "RemoveContainer" containerID="3c725ce28ecdcdfb0f963e6c9c644ced6d2a5e3be33f29a3810ad81eeb3157bb" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.290556 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339 is running failed: container process not found" containerID="edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.290650 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76 is running failed: container process not found" containerID="87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.290998 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76 is running failed: container process not found" containerID="87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.291080 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339 is running failed: container process not found" containerID="edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.291316 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76 is running failed: container process not found" containerID="87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.291401 4832 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="nbdb" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.291462 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339 is running failed: container process not found" containerID="edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.291541 4832 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="sbdb" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.587154 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovn-acl-logging/0.log" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.587805 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovn-controller/0.log" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.588254 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712070 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-systemd\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712149 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rdj6\" (UniqueName: \"kubernetes.io/projected/28e6c98b-e4b6-4027-8cf5-655985e80fac-kube-api-access-2rdj6\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712173 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-env-overrides\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712197 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-ovn-kubernetes\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712217 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-config\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712236 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-script-lib\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712277 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-var-lib-openvswitch\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-ovn\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-bin\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712370 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-netns\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712393 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-etc-openvswitch\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712418 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-netd\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712437 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-kubelet\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712464 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-systemd-units\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712481 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-var-lib-cni-networks-ovn-kubernetes\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712504 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-log-socket\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712517 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-node-log\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712552 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-openvswitch\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712574 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovn-node-metrics-cert\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712598 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-slash\") pod \"28e6c98b-e4b6-4027-8cf5-655985e80fac\" (UID: \"28e6c98b-e4b6-4027-8cf5-655985e80fac\") " Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712855 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-slash" (OuterVolumeSpecName: "host-slash") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.712986 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713024 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713053 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713096 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713100 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713114 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713155 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713203 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713218 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713236 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713254 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-log-socket" (OuterVolumeSpecName: "log-socket") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713286 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-node-log" (OuterVolumeSpecName: "node-log") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713301 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713469 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.713482 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.717851 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28e6c98b-e4b6-4027-8cf5-655985e80fac-kube-api-access-2rdj6" (OuterVolumeSpecName: "kube-api-access-2rdj6") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "kube-api-access-2rdj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.728991 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.737715 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "28e6c98b-e4b6-4027-8cf5-655985e80fac" (UID: "28e6c98b-e4b6-4027-8cf5-655985e80fac"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814556 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814590 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814600 4832 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814609 4832 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814618 4832 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814626 4832 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814635 4832 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814647 4832 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-log-socket\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814655 4832 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-node-log\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814662 4832 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814672 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814681 4832 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-slash\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814688 4832 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-systemd\") on node \"crc\" DevicePath 
\"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814698 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rdj6\" (UniqueName: \"kubernetes.io/projected/28e6c98b-e4b6-4027-8cf5-655985e80fac-kube-api-access-2rdj6\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814707 4832 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814715 4832 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814724 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814733 4832 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/28e6c98b-e4b6-4027-8cf5-655985e80fac-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814741 4832 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.814748 4832 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/28e6c98b-e4b6-4027-8cf5-655985e80fac-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847509 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g4tks"] Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847748 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovn-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847780 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovn-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847794 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerName="util" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847800 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerName="util" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847807 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovn-acl-logging" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847816 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovn-acl-logging" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847825 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kubecfg-setup" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847832 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kubecfg-setup" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847842 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kube-rbac-proxy-node" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847848 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kube-rbac-proxy-node" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847854 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847860 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847868 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847874 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847884 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kube-rbac-proxy-ovn-metrics" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847891 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kube-rbac-proxy-ovn-metrics" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847898 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="nbdb" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847903 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="nbdb" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847910 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847916 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847924 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerName="pull" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847930 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerName="pull" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847939 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="sbdb" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847948 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="sbdb" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847956 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerName="extract" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847962 4832 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerName="extract" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847972 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847978 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.847984 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="northd" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.847989 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="northd" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848086 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848096 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848103 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovn-acl-logging" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848110 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="northd" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848119 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kube-rbac-proxy-ovn-metrics" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848125 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="nbdb" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848132 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848139 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="sbdb" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848147 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848157 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5c5779-5cc4-48b6-92ad-5c2e2248804d" containerName="extract" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848167 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="kube-rbac-proxy-node" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848177 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovn-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: E1002 18:30:54.848289 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 
18:30:54.848296 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.848388 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" containerName="ovnkube-controller" Oct 02 18:30:54 crc kubenswrapper[4832]: I1002 18:30:54.850053 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.016950 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-ovnkube-config\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017000 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-slash\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-cni-netd\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-systemd\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-env-overrides\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017273 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-cni-bin\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017297 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/5cbcab50-8876-476c-8983-daab0a01ca16-ovn-node-metrics-cert\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-systemd-units\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-node-log\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017382 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-etc-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017408 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-log-socket\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017447 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-ovn\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017474 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-var-lib-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017493 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017525 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-kubelet\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017538 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-run-netns\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-ovnkube-script-lib\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.017597 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqhfr\" (UniqueName: \"kubernetes.io/projected/5cbcab50-8876-476c-8983-daab0a01ca16-kube-api-access-nqhfr\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119022 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-systemd\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119284 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119157 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-systemd\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119401 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-env-overrides\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119468 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-cni-bin\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cbcab50-8876-476c-8983-daab0a01ca16-ovn-node-metrics-cert\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119512 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-systemd-units\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119561 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-node-log\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-etc-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119599 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-log-socket\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119619 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-systemd-units\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119642 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-ovn\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119626 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-cni-bin\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119658 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-log-socket\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119699 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-node-log\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119693 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-run-ovn\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-run-ovn-kubernetes\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-etc-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119855 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-var-lib-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119946 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-var-lib-openvswitch\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119960 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-kubelet\") pod 
\"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-run-netns\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120005 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-kubelet\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.119986 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120047 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-run-netns\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-ovnkube-script-lib\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120355 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqhfr\" (UniqueName: \"kubernetes.io/projected/5cbcab50-8876-476c-8983-daab0a01ca16-kube-api-access-nqhfr\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120433 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-ovnkube-config\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120465 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-slash\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120515 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-cni-netd\") pod \"ovnkube-node-g4tks\" (UID: 
\"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-cni-netd\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.120567 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cbcab50-8876-476c-8983-daab0a01ca16-host-slash\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.121063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-env-overrides\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.121227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-ovnkube-script-lib\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.121277 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cbcab50-8876-476c-8983-daab0a01ca16-ovnkube-config\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.124635 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cbcab50-8876-476c-8983-daab0a01ca16-ovn-node-metrics-cert\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.146069 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovn-acl-logging/0.log" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.146472 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9sz9w_28e6c98b-e4b6-4027-8cf5-655985e80fac/ovn-controller/0.log" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.146832 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" event={"ID":"28e6c98b-e4b6-4027-8cf5-655985e80fac","Type":"ContainerDied","Data":"ec348c0fa8c5534d4d1134e24543066d4d3a02f5504730734a0a0c14d24d4e3c"} Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.146881 4832 scope.go:117] "RemoveContainer" containerID="7b7ed9c483dab864dba141917bd6140e6b26be62665400cf5e2a0ddc4cbc418e" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.146913 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9sz9w" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.165934 4832 scope.go:117] "RemoveContainer" containerID="edaedb7689840247e62aa84f1181b6583a7e4a4f1c874e125b9aec68ac8c0339" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.179517 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqhfr\" (UniqueName: \"kubernetes.io/projected/5cbcab50-8876-476c-8983-daab0a01ca16-kube-api-access-nqhfr\") pod \"ovnkube-node-g4tks\" (UID: \"5cbcab50-8876-476c-8983-daab0a01ca16\") " pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.198274 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9sz9w"] Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.199297 4832 scope.go:117] "RemoveContainer" containerID="87f75c7af04ece52740c48874358a8450436a1c4d09c0c6caab3222e210cbb76" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.208952 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9sz9w"] Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.214525 4832 scope.go:117] "RemoveContainer" containerID="a86cd93001395e6513b2a85cbfd97b34cd60b174f939583d18dc0aa0b29de8bd" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.233403 4832 scope.go:117] "RemoveContainer" containerID="e056974503149e99512429c95103c6a2fab19123afc55294c452c6b0a8e5807b" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.254839 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28e6c98b-e4b6-4027-8cf5-655985e80fac" path="/var/lib/kubelet/pods/28e6c98b-e4b6-4027-8cf5-655985e80fac/volumes" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.295855 4832 scope.go:117] "RemoveContainer" containerID="35e81f67a94bf1ca5e990365a74fe7a4a6792a80b14a02c90d543577f0bfeaa1" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.326428 4832 scope.go:117] "RemoveContainer" containerID="04f1a50c52599d67a31d486da85e1ce44895f206b47cc50a19cb02ad26ab404e" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.341506 4832 scope.go:117] "RemoveContainer" containerID="a3793c6103654199b1e48715e87f8a949b81ad384c6913c499fe48b068fe50dc" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.362927 4832 scope.go:117] "RemoveContainer" containerID="2eddf0171384ef44c1a39bc5070b16e560933cf33a08e4e0e259debbe35d8611" Oct 02 18:30:55 crc kubenswrapper[4832]: I1002 18:30:55.462378 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:30:56 crc kubenswrapper[4832]: I1002 18:30:56.153727 4832 generic.go:334] "Generic (PLEG): container finished" podID="5cbcab50-8876-476c-8983-daab0a01ca16" containerID="e089f861aa4536b950ce5c6e59ac4d0904b98828795896bc9e78f538bb8696a2" exitCode=0 Oct 02 18:30:56 crc kubenswrapper[4832]: I1002 18:30:56.153814 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerDied","Data":"e089f861aa4536b950ce5c6e59ac4d0904b98828795896bc9e78f538bb8696a2"} Oct 02 18:30:56 crc kubenswrapper[4832]: I1002 18:30:56.154041 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"bcaac4e5729a69d743696cb707c153b9faa398ae84547cdee7c5214b29d1da9f"} Oct 02 18:30:56 crc kubenswrapper[4832]: I1002 18:30:56.874916 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:30:56 crc kubenswrapper[4832]: I1002 18:30:56.875209 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:30:57 crc kubenswrapper[4832]: I1002 18:30:57.163542 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"d9afb759e25ed2d1c23d91dc6e05608da895bd0b49b7a70456cc8f9c8a15a46e"} Oct 02 18:30:57 crc kubenswrapper[4832]: I1002 18:30:57.163582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"79792f12ef6e3f0b520497c327b1ceb98256a38b0af2a991bf92452c4d31e8d4"} Oct 02 18:30:57 crc kubenswrapper[4832]: I1002 18:30:57.163599 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"6589f84fd1baee9e42b160af475848a8e4e6927599cef836f60bfde39bf66d42"} Oct 02 18:30:57 crc kubenswrapper[4832]: I1002 18:30:57.163609 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"808fccee1e737a310d217dc6ea8044dc033625e9f778ed22a3126766408d0149"} Oct 02 18:30:57 crc kubenswrapper[4832]: I1002 18:30:57.163619 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"63567932f632cee309d3f45b3aa5ae8a7fd8771b30c1c162cd34571784f0a5af"} Oct 02 18:30:57 crc kubenswrapper[4832]: I1002 18:30:57.163628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" 
event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"28e55ef6df597f1dce78c4ac8b63c4410b6a9538915f518a16a233c34e11f7a7"} Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.365588 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp"] Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.366637 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.368333 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.368570 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-cwxf4" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.368670 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.463207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78tld\" (UniqueName: \"kubernetes.io/projected/38be72b3-2875-4e11-895a-d7b229709e75-kube-api-access-78tld\") pod \"obo-prometheus-operator-7c8cf85677-b4kkp\" (UID: \"38be72b3-2875-4e11-895a-d7b229709e75\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.486582 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn"] Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.487524 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.490135 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.490149 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-4p46r" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.507678 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw"] Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.508560 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.564734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78tld\" (UniqueName: \"kubernetes.io/projected/38be72b3-2875-4e11-895a-d7b229709e75-kube-api-access-78tld\") pod \"obo-prometheus-operator-7c8cf85677-b4kkp\" (UID: \"38be72b3-2875-4e11-895a-d7b229709e75\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.583164 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78tld\" (UniqueName: \"kubernetes.io/projected/38be72b3-2875-4e11-895a-d7b229709e75-kube-api-access-78tld\") pod \"obo-prometheus-operator-7c8cf85677-b4kkp\" (UID: \"38be72b3-2875-4e11-895a-d7b229709e75\") " pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.592062 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-sgrwh"] Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.592776 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.594578 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.594575 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-9f45s" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.665497 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw\" (UID: \"4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.665557 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf1d6e7f-c76f-4888-8465-3651cdd3c079-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn\" (UID: \"cf1d6e7f-c76f-4888-8465-3651cdd3c079\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.665577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw\" (UID: \"4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.665620 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf1d6e7f-c76f-4888-8465-3651cdd3c079-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn\" (UID: \"cf1d6e7f-c76f-4888-8465-3651cdd3c079\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.685554 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.708004 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(464428591e76a2dd01fb9470c2590921022a052c564e529de660458be805019d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.708067 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(464428591e76a2dd01fb9470c2590921022a052c564e529de660458be805019d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.708088 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(464428591e76a2dd01fb9470c2590921022a052c564e529de660458be805019d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.708133 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators(38be72b3-2875-4e11-895a-d7b229709e75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators(38be72b3-2875-4e11-895a-d7b229709e75)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(464428591e76a2dd01fb9470c2590921022a052c564e529de660458be805019d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" podUID="38be72b3-2875-4e11-895a-d7b229709e75" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.709618 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-wczsw"] Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.710684 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.712333 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-ncwrb" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.766783 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e59cc84-d625-4121-956d-773c5be0d917-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-sgrwh\" (UID: \"9e59cc84-d625-4121-956d-773c5be0d917\") " pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.766831 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf1d6e7f-c76f-4888-8465-3651cdd3c079-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn\" (UID: \"cf1d6e7f-c76f-4888-8465-3651cdd3c079\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.766878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdwxc\" (UniqueName: \"kubernetes.io/projected/9e59cc84-d625-4121-956d-773c5be0d917-kube-api-access-rdwxc\") pod \"observability-operator-cc5f78dfc-sgrwh\" (UID: \"9e59cc84-d625-4121-956d-773c5be0d917\") " pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.767072 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw\" (UID: \"4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.767209 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf1d6e7f-c76f-4888-8465-3651cdd3c079-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn\" (UID: \"cf1d6e7f-c76f-4888-8465-3651cdd3c079\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.767246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw\" (UID: \"4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.770674 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf1d6e7f-c76f-4888-8465-3651cdd3c079-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn\" (UID: \"cf1d6e7f-c76f-4888-8465-3651cdd3c079\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.770819 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw\" (UID: \"4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.771874 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw\" (UID: \"4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.774788 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf1d6e7f-c76f-4888-8465-3651cdd3c079-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn\" (UID: \"cf1d6e7f-c76f-4888-8465-3651cdd3c079\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.801120 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.821325 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.838762 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(38417bfc31efd24dcdb7c126ea478cef0425afefe8e58f015d65c45f589913f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.838855 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(38417bfc31efd24dcdb7c126ea478cef0425afefe8e58f015d65c45f589913f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.838883 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(38417bfc31efd24dcdb7c126ea478cef0425afefe8e58f015d65c45f589913f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.838940 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators(cf1d6e7f-c76f-4888-8465-3651cdd3c079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators(cf1d6e7f-c76f-4888-8465-3651cdd3c079)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(38417bfc31efd24dcdb7c126ea478cef0425afefe8e58f015d65c45f589913f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" podUID="cf1d6e7f-c76f-4888-8465-3651cdd3c079" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.847698 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(4e150700c0d3eb6953fcdcb120a9a7cfff9b9a2987ee323c2e41b9aea5fe081b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.847780 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(4e150700c0d3eb6953fcdcb120a9a7cfff9b9a2987ee323c2e41b9aea5fe081b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.847811 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(4e150700c0d3eb6953fcdcb120a9a7cfff9b9a2987ee323c2e41b9aea5fe081b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.847866 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators(4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators(4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(4e150700c0d3eb6953fcdcb120a9a7cfff9b9a2987ee323c2e41b9aea5fe081b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" podUID="4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.868329 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/72b2ae10-6b68-4738-9474-41a9fa1f9f92-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-wczsw\" (UID: \"72b2ae10-6b68-4738-9474-41a9fa1f9f92\") " pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.868383 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-769xx\" (UniqueName: \"kubernetes.io/projected/72b2ae10-6b68-4738-9474-41a9fa1f9f92-kube-api-access-769xx\") pod \"perses-operator-54bc95c9fb-wczsw\" (UID: \"72b2ae10-6b68-4738-9474-41a9fa1f9f92\") " pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.868421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e59cc84-d625-4121-956d-773c5be0d917-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-sgrwh\" (UID: \"9e59cc84-d625-4121-956d-773c5be0d917\") " pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.868575 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwxc\" (UniqueName: \"kubernetes.io/projected/9e59cc84-d625-4121-956d-773c5be0d917-kube-api-access-rdwxc\") pod \"observability-operator-cc5f78dfc-sgrwh\" (UID: \"9e59cc84-d625-4121-956d-773c5be0d917\") " pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.872939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9e59cc84-d625-4121-956d-773c5be0d917-observability-operator-tls\") pod \"observability-operator-cc5f78dfc-sgrwh\" (UID: \"9e59cc84-d625-4121-956d-773c5be0d917\") " pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.893154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwxc\" (UniqueName: \"kubernetes.io/projected/9e59cc84-d625-4121-956d-773c5be0d917-kube-api-access-rdwxc\") pod \"observability-operator-cc5f78dfc-sgrwh\" (UID: \"9e59cc84-d625-4121-956d-773c5be0d917\") " pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.915800 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.955405 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-sgrwh_openshift-operators_9e59cc84-d625-4121-956d-773c5be0d917_0(818562a7292166e6fa5b0f75b7f44cafafd1534d9b327fe70b995ff946fa370f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.955778 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-sgrwh_openshift-operators_9e59cc84-d625-4121-956d-773c5be0d917_0(818562a7292166e6fa5b0f75b7f44cafafd1534d9b327fe70b995ff946fa370f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.955798 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-sgrwh_openshift-operators_9e59cc84-d625-4121-956d-773c5be0d917_0(818562a7292166e6fa5b0f75b7f44cafafd1534d9b327fe70b995ff946fa370f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:30:58 crc kubenswrapper[4832]: E1002 18:30:58.955842 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-cc5f78dfc-sgrwh_openshift-operators(9e59cc84-d625-4121-956d-773c5be0d917)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-cc5f78dfc-sgrwh_openshift-operators(9e59cc84-d625-4121-956d-773c5be0d917)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-sgrwh_openshift-operators_9e59cc84-d625-4121-956d-773c5be0d917_0(818562a7292166e6fa5b0f75b7f44cafafd1534d9b327fe70b995ff946fa370f): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" podUID="9e59cc84-d625-4121-956d-773c5be0d917" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.969674 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/72b2ae10-6b68-4738-9474-41a9fa1f9f92-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-wczsw\" (UID: \"72b2ae10-6b68-4738-9474-41a9fa1f9f92\") " pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.969766 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-769xx\" (UniqueName: \"kubernetes.io/projected/72b2ae10-6b68-4738-9474-41a9fa1f9f92-kube-api-access-769xx\") pod \"perses-operator-54bc95c9fb-wczsw\" (UID: \"72b2ae10-6b68-4738-9474-41a9fa1f9f92\") " pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.970786 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/72b2ae10-6b68-4738-9474-41a9fa1f9f92-openshift-service-ca\") pod \"perses-operator-54bc95c9fb-wczsw\" (UID: \"72b2ae10-6b68-4738-9474-41a9fa1f9f92\") " pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:58 crc kubenswrapper[4832]: I1002 18:30:58.992533 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-769xx\" (UniqueName: \"kubernetes.io/projected/72b2ae10-6b68-4738-9474-41a9fa1f9f92-kube-api-access-769xx\") pod \"perses-operator-54bc95c9fb-wczsw\" (UID: \"72b2ae10-6b68-4738-9474-41a9fa1f9f92\") " pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:59 crc kubenswrapper[4832]: I1002 18:30:59.047535 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:59 crc kubenswrapper[4832]: E1002 18:30:59.066500 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(e2415df92532ffc1d9efa25b5b6924b2a4238799c53249f9a17e8e98133c112a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:30:59 crc kubenswrapper[4832]: E1002 18:30:59.066569 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(e2415df92532ffc1d9efa25b5b6924b2a4238799c53249f9a17e8e98133c112a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:59 crc kubenswrapper[4832]: E1002 18:30:59.066612 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(e2415df92532ffc1d9efa25b5b6924b2a4238799c53249f9a17e8e98133c112a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:30:59 crc kubenswrapper[4832]: E1002 18:30:59.066658 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-54bc95c9fb-wczsw_openshift-operators(72b2ae10-6b68-4738-9474-41a9fa1f9f92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-54bc95c9fb-wczsw_openshift-operators(72b2ae10-6b68-4738-9474-41a9fa1f9f92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(e2415df92532ffc1d9efa25b5b6924b2a4238799c53249f9a17e8e98133c112a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" podUID="72b2ae10-6b68-4738-9474-41a9fa1f9f92" Oct 02 18:31:00 crc kubenswrapper[4832]: I1002 18:31:00.187224 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"18adeec77a8c0b9818bfd0e70bdc83346ba8bbe20f60c2f8cc562944180072ef"} Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.201217 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" event={"ID":"5cbcab50-8876-476c-8983-daab0a01ca16","Type":"ContainerStarted","Data":"f2efe3128344670c8f07a832fda0c10f5404bf58dd69f185319d57458bf87727"} Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.201561 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.201581 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.226857 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.230963 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" podStartSLOduration=8.230949531 podStartE2EDuration="8.230949531s" podCreationTimestamp="2025-10-02 18:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:31:02.228431862 +0000 UTC m=+619.197874754" watchObservedRunningTime="2025-10-02 18:31:02.230949531 +0000 UTC m=+619.200392403" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.351192 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-wczsw"] Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.351339 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.351797 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.359259 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn"] Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.359394 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.359839 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.372803 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-sgrwh"] Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.372930 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.373404 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.390313 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp"] Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.390672 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.391103 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.394162 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw"] Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.394259 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:02 crc kubenswrapper[4832]: I1002 18:31:02.394612 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.438528 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(14ce463439a0f743a3de05ad2e0742e18d4a705087e708a944fe084b3a520ea5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.438586 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(14ce463439a0f743a3de05ad2e0742e18d4a705087e708a944fe084b3a520ea5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.438607 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(14ce463439a0f743a3de05ad2e0742e18d4a705087e708a944fe084b3a520ea5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.438649 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-54bc95c9fb-wczsw_openshift-operators(72b2ae10-6b68-4738-9474-41a9fa1f9f92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-54bc95c9fb-wczsw_openshift-operators(72b2ae10-6b68-4738-9474-41a9fa1f9f92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(14ce463439a0f743a3de05ad2e0742e18d4a705087e708a944fe084b3a520ea5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" podUID="72b2ae10-6b68-4738-9474-41a9fa1f9f92" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.459453 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(85c519c1ff745d21e79722f38abfef9edd56bf7233e3861658a7e87d66e15b79): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.459520 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(85c519c1ff745d21e79722f38abfef9edd56bf7233e3861658a7e87d66e15b79): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.459546 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(85c519c1ff745d21e79722f38abfef9edd56bf7233e3861658a7e87d66e15b79): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.459599 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators(cf1d6e7f-c76f-4888-8465-3651cdd3c079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators(cf1d6e7f-c76f-4888-8465-3651cdd3c079)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(85c519c1ff745d21e79722f38abfef9edd56bf7233e3861658a7e87d66e15b79): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" podUID="cf1d6e7f-c76f-4888-8465-3651cdd3c079" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.474296 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-sgrwh_openshift-operators_9e59cc84-d625-4121-956d-773c5be0d917_0(9c7b3f27af50ba353c9178a1cee187f4995c4c4697f29364e09c2e3d8e583650): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.474358 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-sgrwh_openshift-operators_9e59cc84-d625-4121-956d-773c5be0d917_0(9c7b3f27af50ba353c9178a1cee187f4995c4c4697f29364e09c2e3d8e583650): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.474383 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-sgrwh_openshift-operators_9e59cc84-d625-4121-956d-773c5be0d917_0(9c7b3f27af50ba353c9178a1cee187f4995c4c4697f29364e09c2e3d8e583650): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.474423 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-cc5f78dfc-sgrwh_openshift-operators(9e59cc84-d625-4121-956d-773c5be0d917)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-cc5f78dfc-sgrwh_openshift-operators(9e59cc84-d625-4121-956d-773c5be0d917)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-cc5f78dfc-sgrwh_openshift-operators_9e59cc84-d625-4121-956d-773c5be0d917_0(9c7b3f27af50ba353c9178a1cee187f4995c4c4697f29364e09c2e3d8e583650): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" podUID="9e59cc84-d625-4121-956d-773c5be0d917" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.486496 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(80a9b6da13562a18ba98709af443bbe5182f4ba881b87225773f968dd73ce0ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.486550 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(80a9b6da13562a18ba98709af443bbe5182f4ba881b87225773f968dd73ce0ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.486571 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(80a9b6da13562a18ba98709af443bbe5182f4ba881b87225773f968dd73ce0ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.486614 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators(4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators(4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(80a9b6da13562a18ba98709af443bbe5182f4ba881b87225773f968dd73ce0ff): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" podUID="4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.500870 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(e7da1a68dca480f12cbf254f2913e0721462998e2f6fb0a1377908fef58fa758): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.500928 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(e7da1a68dca480f12cbf254f2913e0721462998e2f6fb0a1377908fef58fa758): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.500946 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(e7da1a68dca480f12cbf254f2913e0721462998e2f6fb0a1377908fef58fa758): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:02 crc kubenswrapper[4832]: E1002 18:31:02.500981 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators(38be72b3-2875-4e11-895a-d7b229709e75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators(38be72b3-2875-4e11-895a-d7b229709e75)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(e7da1a68dca480f12cbf254f2913e0721462998e2f6fb0a1377908fef58fa758): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" podUID="38be72b3-2875-4e11-895a-d7b229709e75" Oct 02 18:31:03 crc kubenswrapper[4832]: I1002 18:31:03.211633 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:31:03 crc kubenswrapper[4832]: I1002 18:31:03.288113 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:31:04 crc kubenswrapper[4832]: I1002 18:31:04.222893 4832 scope.go:117] "RemoveContainer" containerID="3fd04f87293784afc729f1d771a6655a3c23151c34c8517161cd3a820cb2cbc5" Oct 02 18:31:04 crc kubenswrapper[4832]: E1002 18:31:04.223120 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lhm4n_openshift-multus(7319e265-17de-4801-8ab7-7671dba7489d)\"" pod="openshift-multus/multus-lhm4n" podUID="7319e265-17de-4801-8ab7-7671dba7489d" Oct 02 18:31:14 crc kubenswrapper[4832]: I1002 18:31:14.222092 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:14 crc kubenswrapper[4832]: I1002 18:31:14.222135 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:14 crc kubenswrapper[4832]: I1002 18:31:14.223009 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:14 crc kubenswrapper[4832]: I1002 18:31:14.223213 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:14 crc kubenswrapper[4832]: E1002 18:31:14.252068 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(e714bb22275c98414f7b8373014dc91799b3733d121500371f04398934adb8a6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:31:14 crc kubenswrapper[4832]: E1002 18:31:14.252222 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(e714bb22275c98414f7b8373014dc91799b3733d121500371f04398934adb8a6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:14 crc kubenswrapper[4832]: E1002 18:31:14.252388 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(e714bb22275c98414f7b8373014dc91799b3733d121500371f04398934adb8a6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:14 crc kubenswrapper[4832]: E1002 18:31:14.252517 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators(4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators(4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_openshift-operators_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c_0(e714bb22275c98414f7b8373014dc91799b3733d121500371f04398934adb8a6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" podUID="4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c" Oct 02 18:31:14 crc kubenswrapper[4832]: E1002 18:31:14.264728 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(7e43ab4d572b016a5963b762b4dfe77529bca3cec4ee3b5ae53edaa4f9b2aa1e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Oct 02 18:31:14 crc kubenswrapper[4832]: E1002 18:31:14.264791 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(7e43ab4d572b016a5963b762b4dfe77529bca3cec4ee3b5ae53edaa4f9b2aa1e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:14 crc kubenswrapper[4832]: E1002 18:31:14.264814 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(7e43ab4d572b016a5963b762b4dfe77529bca3cec4ee3b5ae53edaa4f9b2aa1e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:14 crc kubenswrapper[4832]: E1002 18:31:14.264862 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-54bc95c9fb-wczsw_openshift-operators(72b2ae10-6b68-4738-9474-41a9fa1f9f92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-54bc95c9fb-wczsw_openshift-operators(72b2ae10-6b68-4738-9474-41a9fa1f9f92)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-54bc95c9fb-wczsw_openshift-operators_72b2ae10-6b68-4738-9474-41a9fa1f9f92_0(7e43ab4d572b016a5963b762b4dfe77529bca3cec4ee3b5ae53edaa4f9b2aa1e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" podUID="72b2ae10-6b68-4738-9474-41a9fa1f9f92" Oct 02 18:31:15 crc kubenswrapper[4832]: I1002 18:31:15.226634 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:15 crc kubenswrapper[4832]: I1002 18:31:15.226803 4832 scope.go:117] "RemoveContainer" containerID="3fd04f87293784afc729f1d771a6655a3c23151c34c8517161cd3a820cb2cbc5" Oct 02 18:31:15 crc kubenswrapper[4832]: I1002 18:31:15.227163 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:15 crc kubenswrapper[4832]: E1002 18:31:15.272202 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(c91806883aae4a476396baeef83cd47df9aff9e7fdaac14bcf775411b16929f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:31:15 crc kubenswrapper[4832]: E1002 18:31:15.272611 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(c91806883aae4a476396baeef83cd47df9aff9e7fdaac14bcf775411b16929f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:15 crc kubenswrapper[4832]: E1002 18:31:15.272642 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(c91806883aae4a476396baeef83cd47df9aff9e7fdaac14bcf775411b16929f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:15 crc kubenswrapper[4832]: E1002 18:31:15.272701 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators(cf1d6e7f-c76f-4888-8465-3651cdd3c079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators(cf1d6e7f-c76f-4888-8465-3651cdd3c079)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_openshift-operators_cf1d6e7f-c76f-4888-8465-3651cdd3c079_0(c91806883aae4a476396baeef83cd47df9aff9e7fdaac14bcf775411b16929f2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" podUID="cf1d6e7f-c76f-4888-8465-3651cdd3c079" Oct 02 18:31:16 crc kubenswrapper[4832]: I1002 18:31:16.222625 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:16 crc kubenswrapper[4832]: I1002 18:31:16.223306 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:16 crc kubenswrapper[4832]: E1002 18:31:16.249568 4832 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(1d1654eb35c8bc461ff61f8fdf1710d56347c9521aa9eab13a52f44d28365510): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 02 18:31:16 crc kubenswrapper[4832]: E1002 18:31:16.250031 4832 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(1d1654eb35c8bc461ff61f8fdf1710d56347c9521aa9eab13a52f44d28365510): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:16 crc kubenswrapper[4832]: E1002 18:31:16.250106 4832 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(1d1654eb35c8bc461ff61f8fdf1710d56347c9521aa9eab13a52f44d28365510): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:16 crc kubenswrapper[4832]: E1002 18:31:16.250189 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators(38be72b3-2875-4e11-895a-d7b229709e75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators(38be72b3-2875-4e11-895a-d7b229709e75)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-7c8cf85677-b4kkp_openshift-operators_38be72b3-2875-4e11-895a-d7b229709e75_0(1d1654eb35c8bc461ff61f8fdf1710d56347c9521aa9eab13a52f44d28365510): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" podUID="38be72b3-2875-4e11-895a-d7b229709e75" Oct 02 18:31:16 crc kubenswrapper[4832]: I1002 18:31:16.309520 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhm4n_7319e265-17de-4801-8ab7-7671dba7489d/kube-multus/2.log" Oct 02 18:31:16 crc kubenswrapper[4832]: I1002 18:31:16.309569 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhm4n" event={"ID":"7319e265-17de-4801-8ab7-7671dba7489d","Type":"ContainerStarted","Data":"111463deb8ef59d3212eeec82f650df472818fb01931ed1041ec053e4da61aa5"} Oct 02 18:31:17 crc kubenswrapper[4832]: I1002 18:31:17.235512 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:31:17 crc kubenswrapper[4832]: I1002 18:31:17.236342 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:31:17 crc kubenswrapper[4832]: I1002 18:31:17.468473 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-cc5f78dfc-sgrwh"] Oct 02 18:31:18 crc kubenswrapper[4832]: I1002 18:31:18.329456 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" event={"ID":"9e59cc84-d625-4121-956d-773c5be0d917","Type":"ContainerStarted","Data":"ad654f537f0ea4fa2c43ca37f05a072bdc4a9846ea03a8c669d75429db40636a"} Oct 02 18:31:25 crc kubenswrapper[4832]: I1002 18:31:25.487322 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g4tks" Oct 02 18:31:26 crc kubenswrapper[4832]: I1002 18:31:26.222378 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:26 crc kubenswrapper[4832]: I1002 18:31:26.223099 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:26 crc kubenswrapper[4832]: I1002 18:31:26.876006 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:31:26 crc kubenswrapper[4832]: I1002 18:31:26.877150 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:31:26 crc kubenswrapper[4832]: I1002 18:31:26.877340 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:31:26 crc kubenswrapper[4832]: I1002 18:31:26.878208 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20aef20185e6aef6323d2c1f8a7e5979029b54a32ae8f3401b046519bdae37e1"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 18:31:26 crc kubenswrapper[4832]: I1002 18:31:26.878443 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://20aef20185e6aef6323d2c1f8a7e5979029b54a32ae8f3401b046519bdae37e1" gracePeriod=600 Oct 02 18:31:27 crc kubenswrapper[4832]: I1002 18:31:27.222369 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:27 crc kubenswrapper[4832]: I1002 18:31:27.222840 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" Oct 02 18:31:27 crc kubenswrapper[4832]: I1002 18:31:27.419572 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="20aef20185e6aef6323d2c1f8a7e5979029b54a32ae8f3401b046519bdae37e1" exitCode=0 Oct 02 18:31:27 crc kubenswrapper[4832]: I1002 18:31:27.419643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"20aef20185e6aef6323d2c1f8a7e5979029b54a32ae8f3401b046519bdae37e1"} Oct 02 18:31:27 crc kubenswrapper[4832]: I1002 18:31:27.419735 4832 scope.go:117] "RemoveContainer" containerID="029420f8fd747c5d74aa276bd82319f4ec00978e474c4a1efa16e6ab08101758" Oct 02 18:31:29 crc kubenswrapper[4832]: I1002 18:31:29.433892 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"39c61194f4de266798fee5bed294464e772af0f2983e7b44f1e151219ed48151"} Oct 02 18:31:29 crc kubenswrapper[4832]: I1002 18:31:29.518033 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw"] Oct 02 18:31:29 crc kubenswrapper[4832]: I1002 18:31:29.670309 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-54bc95c9fb-wczsw"] Oct 02 18:31:30 crc kubenswrapper[4832]: I1002 18:31:30.222206 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:30 crc kubenswrapper[4832]: I1002 18:31:30.222742 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" Oct 02 18:31:31 crc kubenswrapper[4832]: I1002 18:31:31.222768 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:31 crc kubenswrapper[4832]: I1002 18:31:31.223715 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" Oct 02 18:31:32 crc kubenswrapper[4832]: W1002 18:31:32.681182 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c3e8b2c_bc7f_456f_b39b_a08d5333ce0c.slice/crio-a5977dee9a53ae78af430c519180159c50a6b828ce5a99013a680f39f3dad534 WatchSource:0}: Error finding container a5977dee9a53ae78af430c519180159c50a6b828ce5a99013a680f39f3dad534: Status 404 returned error can't find the container with id a5977dee9a53ae78af430c519180159c50a6b828ce5a99013a680f39f3dad534 Oct 02 18:31:32 crc kubenswrapper[4832]: W1002 18:31:32.686172 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72b2ae10_6b68_4738_9474_41a9fa1f9f92.slice/crio-e3155fb425498ff3b3c86d3324ac4310d2d654186d76bf5b8d1f99cd17b5bbeb WatchSource:0}: Error finding container e3155fb425498ff3b3c86d3324ac4310d2d654186d76bf5b8d1f99cd17b5bbeb: Status 404 returned error can't find the container with id e3155fb425498ff3b3c86d3324ac4310d2d654186d76bf5b8d1f99cd17b5bbeb Oct 02 18:31:32 crc kubenswrapper[4832]: I1002 18:31:32.957778 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp"] Oct 02 18:31:32 crc kubenswrapper[4832]: I1002 18:31:32.983697 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn"] Oct 02 18:31:32 crc kubenswrapper[4832]: W1002 18:31:32.989487 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf1d6e7f_c76f_4888_8465_3651cdd3c079.slice/crio-ae32d518d6cc39a5e87619ed4b6868cc6a27741bafa30d9753253f00f213329d WatchSource:0}: Error finding container ae32d518d6cc39a5e87619ed4b6868cc6a27741bafa30d9753253f00f213329d: Status 404 returned error can't find the container with id ae32d518d6cc39a5e87619ed4b6868cc6a27741bafa30d9753253f00f213329d Oct 02 18:31:33 crc kubenswrapper[4832]: I1002 18:31:33.460059 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" event={"ID":"4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c","Type":"ContainerStarted","Data":"a5977dee9a53ae78af430c519180159c50a6b828ce5a99013a680f39f3dad534"} Oct 02 18:31:33 crc kubenswrapper[4832]: I1002 18:31:33.461012 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" event={"ID":"cf1d6e7f-c76f-4888-8465-3651cdd3c079","Type":"ContainerStarted","Data":"ae32d518d6cc39a5e87619ed4b6868cc6a27741bafa30d9753253f00f213329d"} Oct 02 18:31:33 crc kubenswrapper[4832]: I1002 18:31:33.462336 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" event={"ID":"9e59cc84-d625-4121-956d-773c5be0d917","Type":"ContainerStarted","Data":"086a0479001fe60113f08069bf0fd32e9eea2fe075362ea214231bf0df1880b7"} Oct 02 18:31:33 crc kubenswrapper[4832]: I1002 18:31:33.462673 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:31:33 crc kubenswrapper[4832]: I1002 18:31:33.463511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" 
event={"ID":"72b2ae10-6b68-4738-9474-41a9fa1f9f92","Type":"ContainerStarted","Data":"e3155fb425498ff3b3c86d3324ac4310d2d654186d76bf5b8d1f99cd17b5bbeb"} Oct 02 18:31:33 crc kubenswrapper[4832]: I1002 18:31:33.464468 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" event={"ID":"38be72b3-2875-4e11-895a-d7b229709e75","Type":"ContainerStarted","Data":"7ba57e978e36b2b595e8b3e8476a748f6bfb4aee54ce57fada234c0d4bce7756"} Oct 02 18:31:33 crc kubenswrapper[4832]: I1002 18:31:33.471726 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" Oct 02 18:31:33 crc kubenswrapper[4832]: I1002 18:31:33.487954 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-cc5f78dfc-sgrwh" podStartSLOduration=20.177877986 podStartE2EDuration="35.487931858s" podCreationTimestamp="2025-10-02 18:30:58 +0000 UTC" firstStartedPulling="2025-10-02 18:31:17.477409416 +0000 UTC m=+634.446852288" lastFinishedPulling="2025-10-02 18:31:32.787463288 +0000 UTC m=+649.756906160" observedRunningTime="2025-10-02 18:31:33.481605701 +0000 UTC m=+650.451048573" watchObservedRunningTime="2025-10-02 18:31:33.487931858 +0000 UTC m=+650.457374740" Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.500986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" event={"ID":"38be72b3-2875-4e11-895a-d7b229709e75","Type":"ContainerStarted","Data":"3e2d20202f95265331272706cd64eb6a913f040445e639950b3fa93c1976295a"} Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.503303 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" event={"ID":"4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c","Type":"ContainerStarted","Data":"4da00100e6912d54867f983aab7785d12763e14332bab57050f405b5f4dca080"} Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.506211 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" event={"ID":"cf1d6e7f-c76f-4888-8465-3651cdd3c079","Type":"ContainerStarted","Data":"57b20ddfd69114d031852df2c3ef71c2c98e25b15059f40d7d1dad39af5c056a"} Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.508750 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" event={"ID":"72b2ae10-6b68-4738-9474-41a9fa1f9f92","Type":"ContainerStarted","Data":"64ea4ce268ac87a333306b29fd010c4b13d3d41b76391232c07bea5bbe357afd"} Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.509436 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.547369 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn" podStartSLOduration=37.712469414 podStartE2EDuration="40.547340747s" podCreationTimestamp="2025-10-02 18:30:58 +0000 UTC" firstStartedPulling="2025-10-02 18:31:32.991726981 +0000 UTC m=+649.961169853" lastFinishedPulling="2025-10-02 18:31:35.826598314 +0000 UTC m=+652.796041186" observedRunningTime="2025-10-02 18:31:38.544406005 +0000 UTC m=+655.513848887" watchObservedRunningTime="2025-10-02 18:31:38.547340747 +0000 UTC 
m=+655.516783659" Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.553846 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-7c8cf85677-b4kkp" podStartSLOduration=37.631860514 podStartE2EDuration="40.553826559s" podCreationTimestamp="2025-10-02 18:30:58 +0000 UTC" firstStartedPulling="2025-10-02 18:31:32.965009656 +0000 UTC m=+649.934452528" lastFinishedPulling="2025-10-02 18:31:35.886975701 +0000 UTC m=+652.856418573" observedRunningTime="2025-10-02 18:31:38.525722811 +0000 UTC m=+655.495165683" watchObservedRunningTime="2025-10-02 18:31:38.553826559 +0000 UTC m=+655.523269471" Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.571681 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw" podStartSLOduration=37.465469406 podStartE2EDuration="40.571649487s" podCreationTimestamp="2025-10-02 18:30:58 +0000 UTC" firstStartedPulling="2025-10-02 18:31:32.720258718 +0000 UTC m=+649.689726991" lastFinishedPulling="2025-10-02 18:31:35.8264642 +0000 UTC m=+652.795907072" observedRunningTime="2025-10-02 18:31:38.564789762 +0000 UTC m=+655.534232644" watchObservedRunningTime="2025-10-02 18:31:38.571649487 +0000 UTC m=+655.541092399" Oct 02 18:31:38 crc kubenswrapper[4832]: I1002 18:31:38.597896 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" podStartSLOduration=37.479436232 podStartE2EDuration="40.597875016s" podCreationTimestamp="2025-10-02 18:30:58 +0000 UTC" firstStartedPulling="2025-10-02 18:31:32.710563105 +0000 UTC m=+649.680006027" lastFinishedPulling="2025-10-02 18:31:35.829001899 +0000 UTC m=+652.798444811" observedRunningTime="2025-10-02 18:31:38.594685376 +0000 UTC m=+655.564128268" watchObservedRunningTime="2025-10-02 18:31:38.597875016 +0000 UTC m=+655.567317898" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.619677 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gmljf"] Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.621170 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-gmljf" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.622958 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-t79xk" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.623468 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.637488 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.638848 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mm5th"] Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.640094 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-mm5th" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.642357 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-sd8sh" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.652465 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gmljf"] Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.669483 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mm5th"] Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.697372 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-ml6fs"] Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.698491 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.700998 4832 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fj9fv" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.706438 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-ml6fs"] Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.753216 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zmn\" (UniqueName: \"kubernetes.io/projected/e3727292-356b-4969-bcb2-c57587cbf4a4-kube-api-access-64zmn\") pod \"cert-manager-5b446d88c5-mm5th\" (UID: \"e3727292-356b-4969-bcb2-c57587cbf4a4\") " pod="cert-manager/cert-manager-5b446d88c5-mm5th" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.753381 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsstl\" (UniqueName: \"kubernetes.io/projected/c98ef57c-5a9d-4947-a0e6-3658c7f54073-kube-api-access-tsstl\") pod \"cert-manager-webhook-5655c58dd6-ml6fs\" (UID: \"c98ef57c-5a9d-4947-a0e6-3658c7f54073\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.753427 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr6h6\" (UniqueName: \"kubernetes.io/projected/4fa1e1e0-e670-4a90-9051-76f8448e9a9f-kube-api-access-mr6h6\") pod \"cert-manager-cainjector-7f985d654d-gmljf\" (UID: \"4fa1e1e0-e670-4a90-9051-76f8448e9a9f\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gmljf" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.855971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsstl\" (UniqueName: \"kubernetes.io/projected/c98ef57c-5a9d-4947-a0e6-3658c7f54073-kube-api-access-tsstl\") pod \"cert-manager-webhook-5655c58dd6-ml6fs\" (UID: \"c98ef57c-5a9d-4947-a0e6-3658c7f54073\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.856069 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr6h6\" (UniqueName: \"kubernetes.io/projected/4fa1e1e0-e670-4a90-9051-76f8448e9a9f-kube-api-access-mr6h6\") pod \"cert-manager-cainjector-7f985d654d-gmljf\" (UID: \"4fa1e1e0-e670-4a90-9051-76f8448e9a9f\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gmljf" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 
18:31:39.856339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zmn\" (UniqueName: \"kubernetes.io/projected/e3727292-356b-4969-bcb2-c57587cbf4a4-kube-api-access-64zmn\") pod \"cert-manager-5b446d88c5-mm5th\" (UID: \"e3727292-356b-4969-bcb2-c57587cbf4a4\") " pod="cert-manager/cert-manager-5b446d88c5-mm5th" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.880411 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr6h6\" (UniqueName: \"kubernetes.io/projected/4fa1e1e0-e670-4a90-9051-76f8448e9a9f-kube-api-access-mr6h6\") pod \"cert-manager-cainjector-7f985d654d-gmljf\" (UID: \"4fa1e1e0-e670-4a90-9051-76f8448e9a9f\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gmljf" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.883516 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zmn\" (UniqueName: \"kubernetes.io/projected/e3727292-356b-4969-bcb2-c57587cbf4a4-kube-api-access-64zmn\") pod \"cert-manager-5b446d88c5-mm5th\" (UID: \"e3727292-356b-4969-bcb2-c57587cbf4a4\") " pod="cert-manager/cert-manager-5b446d88c5-mm5th" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.890469 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsstl\" (UniqueName: \"kubernetes.io/projected/c98ef57c-5a9d-4947-a0e6-3658c7f54073-kube-api-access-tsstl\") pod \"cert-manager-webhook-5655c58dd6-ml6fs\" (UID: \"c98ef57c-5a9d-4947-a0e6-3658c7f54073\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.936572 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-gmljf" Oct 02 18:31:39 crc kubenswrapper[4832]: I1002 18:31:39.956651 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-mm5th" Oct 02 18:31:40 crc kubenswrapper[4832]: I1002 18:31:40.014165 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" Oct 02 18:31:40 crc kubenswrapper[4832]: I1002 18:31:40.273053 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-ml6fs"] Oct 02 18:31:40 crc kubenswrapper[4832]: W1002 18:31:40.283054 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc98ef57c_5a9d_4947_a0e6_3658c7f54073.slice/crio-20048546217cb6f2d652abfd06f8beffea65c7662790768d67598b70675aa1f6 WatchSource:0}: Error finding container 20048546217cb6f2d652abfd06f8beffea65c7662790768d67598b70675aa1f6: Status 404 returned error can't find the container with id 20048546217cb6f2d652abfd06f8beffea65c7662790768d67598b70675aa1f6 Oct 02 18:31:40 crc kubenswrapper[4832]: I1002 18:31:40.367511 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gmljf"] Oct 02 18:31:40 crc kubenswrapper[4832]: W1002 18:31:40.378432 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fa1e1e0_e670_4a90_9051_76f8448e9a9f.slice/crio-5e98534c0ef03b1f4e76c18260efd47c0d3e62a5f7962518fb455d7d24607d88 WatchSource:0}: Error finding container 5e98534c0ef03b1f4e76c18260efd47c0d3e62a5f7962518fb455d7d24607d88: Status 404 returned error can't find the container with id 5e98534c0ef03b1f4e76c18260efd47c0d3e62a5f7962518fb455d7d24607d88 Oct 02 18:31:40 crc kubenswrapper[4832]: I1002 18:31:40.443525 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mm5th"] Oct 02 18:31:40 crc kubenswrapper[4832]: W1002 18:31:40.447822 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3727292_356b_4969_bcb2_c57587cbf4a4.slice/crio-fcbd04b6338e181ead1c5069d27b4c7053c94ddcd107946a7900eab18cc302a3 WatchSource:0}: Error finding container fcbd04b6338e181ead1c5069d27b4c7053c94ddcd107946a7900eab18cc302a3: Status 404 returned error can't find the container with id fcbd04b6338e181ead1c5069d27b4c7053c94ddcd107946a7900eab18cc302a3 Oct 02 18:31:40 crc kubenswrapper[4832]: I1002 18:31:40.520475 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-gmljf" event={"ID":"4fa1e1e0-e670-4a90-9051-76f8448e9a9f","Type":"ContainerStarted","Data":"5e98534c0ef03b1f4e76c18260efd47c0d3e62a5f7962518fb455d7d24607d88"} Oct 02 18:31:40 crc kubenswrapper[4832]: I1002 18:31:40.521497 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" event={"ID":"c98ef57c-5a9d-4947-a0e6-3658c7f54073","Type":"ContainerStarted","Data":"20048546217cb6f2d652abfd06f8beffea65c7662790768d67598b70675aa1f6"} Oct 02 18:31:40 crc kubenswrapper[4832]: I1002 18:31:40.522297 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-mm5th" event={"ID":"e3727292-356b-4969-bcb2-c57587cbf4a4","Type":"ContainerStarted","Data":"fcbd04b6338e181ead1c5069d27b4c7053c94ddcd107946a7900eab18cc302a3"} Oct 02 18:31:45 crc kubenswrapper[4832]: I1002 18:31:45.564773 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-gmljf" event={"ID":"4fa1e1e0-e670-4a90-9051-76f8448e9a9f","Type":"ContainerStarted","Data":"af04c35f5053fb19003ee3fccee2e55bda66b57bc7cf422d11da41104d8d4fed"} Oct 02 18:31:45 crc 
kubenswrapper[4832]: I1002 18:31:45.566115 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" event={"ID":"c98ef57c-5a9d-4947-a0e6-3658c7f54073","Type":"ContainerStarted","Data":"9db1c66e71849ccbbed706b02693831c9d18829d674e5dcb6ea9c9a851dcdd4d"} Oct 02 18:31:45 crc kubenswrapper[4832]: I1002 18:31:45.566277 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" Oct 02 18:31:45 crc kubenswrapper[4832]: I1002 18:31:45.567394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-mm5th" event={"ID":"e3727292-356b-4969-bcb2-c57587cbf4a4","Type":"ContainerStarted","Data":"70d8fcaf8c34477d3c84de1e2117dd09821877ec161e9b650e7273fcf93c86ee"} Oct 02 18:31:45 crc kubenswrapper[4832]: I1002 18:31:45.580436 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-gmljf" podStartSLOduration=2.526241479 podStartE2EDuration="6.580407545s" podCreationTimestamp="2025-10-02 18:31:39 +0000 UTC" firstStartedPulling="2025-10-02 18:31:40.382809206 +0000 UTC m=+657.352252098" lastFinishedPulling="2025-10-02 18:31:44.436975252 +0000 UTC m=+661.406418164" observedRunningTime="2025-10-02 18:31:45.579664652 +0000 UTC m=+662.549107524" watchObservedRunningTime="2025-10-02 18:31:45.580407545 +0000 UTC m=+662.549850447" Oct 02 18:31:45 crc kubenswrapper[4832]: I1002 18:31:45.597381 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" podStartSLOduration=2.43988154 podStartE2EDuration="6.597359954s" podCreationTimestamp="2025-10-02 18:31:39 +0000 UTC" firstStartedPulling="2025-10-02 18:31:40.285192636 +0000 UTC m=+657.254635508" lastFinishedPulling="2025-10-02 18:31:44.44267101 +0000 UTC m=+661.412113922" observedRunningTime="2025-10-02 18:31:45.591840102 +0000 UTC m=+662.561282984" watchObservedRunningTime="2025-10-02 18:31:45.597359954 +0000 UTC m=+662.566802826" Oct 02 18:31:45 crc kubenswrapper[4832]: I1002 18:31:45.613664 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-mm5th" podStartSLOduration=2.524060292 podStartE2EDuration="6.613638923s" podCreationTimestamp="2025-10-02 18:31:39 +0000 UTC" firstStartedPulling="2025-10-02 18:31:40.450198643 +0000 UTC m=+657.419641525" lastFinishedPulling="2025-10-02 18:31:44.539777244 +0000 UTC m=+661.509220156" observedRunningTime="2025-10-02 18:31:45.611887298 +0000 UTC m=+662.581330200" watchObservedRunningTime="2025-10-02 18:31:45.613638923 +0000 UTC m=+662.583081825" Oct 02 18:31:49 crc kubenswrapper[4832]: I1002 18:31:49.050112 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-54bc95c9fb-wczsw" Oct 02 18:31:50 crc kubenswrapper[4832]: I1002 18:31:50.018888 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-ml6fs" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.074839 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx"] Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.076750 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.079135 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.085497 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx"] Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.131444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xkn\" (UniqueName: \"kubernetes.io/projected/5d0750e6-dd9d-4a96-97ec-97f9857702d9-kube-api-access-96xkn\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.131521 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-bundle\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.131628 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-util\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.218431 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2"] Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.222995 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.234878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-bundle\") pod \"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.235020 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96xkn\" (UniqueName: \"kubernetes.io/projected/5d0750e6-dd9d-4a96-97ec-97f9857702d9-kube-api-access-96xkn\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.235089 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hztm\" (UniqueName: \"kubernetes.io/projected/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-kube-api-access-4hztm\") pod \"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.235117 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-bundle\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.235144 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-util\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.235293 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-util\") pod \"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.237911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-bundle\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.242274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-util\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.249700 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2"] Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.275068 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96xkn\" (UniqueName: \"kubernetes.io/projected/5d0750e6-dd9d-4a96-97ec-97f9857702d9-kube-api-access-96xkn\") pod \"0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.336504 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hztm\" (UniqueName: \"kubernetes.io/projected/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-kube-api-access-4hztm\") pod \"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.336624 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-util\") pod \"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.336657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-bundle\") pod \"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.337233 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-bundle\") pod \"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.337299 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-util\") pod \"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.356103 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hztm\" (UniqueName: \"kubernetes.io/projected/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-kube-api-access-4hztm\") pod 
\"b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.422815 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.561992 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.777703 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2"] Oct 02 18:32:13 crc kubenswrapper[4832]: I1002 18:32:13.825323 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx"] Oct 02 18:32:13 crc kubenswrapper[4832]: W1002 18:32:13.844148 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d0750e6_dd9d_4a96_97ec_97f9857702d9.slice/crio-1b3f4829e68beb87458d64bcff69fbc4d4ac6fdea41cb89cd1cf5454cccb8c58 WatchSource:0}: Error finding container 1b3f4829e68beb87458d64bcff69fbc4d4ac6fdea41cb89cd1cf5454cccb8c58: Status 404 returned error can't find the container with id 1b3f4829e68beb87458d64bcff69fbc4d4ac6fdea41cb89cd1cf5454cccb8c58 Oct 02 18:32:14 crc kubenswrapper[4832]: I1002 18:32:14.781553 4832 generic.go:334] "Generic (PLEG): container finished" podID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerID="9e71af12ebbcc517e8bdf32584f878e47252e74db90e5e3c74c05b882746a945" exitCode=0 Oct 02 18:32:14 crc kubenswrapper[4832]: I1002 18:32:14.781625 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" event={"ID":"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364","Type":"ContainerDied","Data":"9e71af12ebbcc517e8bdf32584f878e47252e74db90e5e3c74c05b882746a945"} Oct 02 18:32:14 crc kubenswrapper[4832]: I1002 18:32:14.782123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" event={"ID":"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364","Type":"ContainerStarted","Data":"18cc15f18b8100ce43725dc20ca167046646e30f30573f7b144db941bfb597b6"} Oct 02 18:32:14 crc kubenswrapper[4832]: I1002 18:32:14.784919 4832 generic.go:334] "Generic (PLEG): container finished" podID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerID="83a16d68a90855308e329547126b9b2af65a71ef34b0b4b29be37c6df624d60c" exitCode=0 Oct 02 18:32:14 crc kubenswrapper[4832]: I1002 18:32:14.784963 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" event={"ID":"5d0750e6-dd9d-4a96-97ec-97f9857702d9","Type":"ContainerDied","Data":"83a16d68a90855308e329547126b9b2af65a71ef34b0b4b29be37c6df624d60c"} Oct 02 18:32:14 crc kubenswrapper[4832]: I1002 18:32:14.785002 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" 
event={"ID":"5d0750e6-dd9d-4a96-97ec-97f9857702d9","Type":"ContainerStarted","Data":"1b3f4829e68beb87458d64bcff69fbc4d4ac6fdea41cb89cd1cf5454cccb8c58"} Oct 02 18:32:17 crc kubenswrapper[4832]: I1002 18:32:17.806157 4832 generic.go:334] "Generic (PLEG): container finished" podID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerID="9e4b3662b79de07c26c13126234c44436ba2feac0a701885f324446ab3639af0" exitCode=0 Oct 02 18:32:17 crc kubenswrapper[4832]: I1002 18:32:17.806230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" event={"ID":"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364","Type":"ContainerDied","Data":"9e4b3662b79de07c26c13126234c44436ba2feac0a701885f324446ab3639af0"} Oct 02 18:32:17 crc kubenswrapper[4832]: I1002 18:32:17.809358 4832 generic.go:334] "Generic (PLEG): container finished" podID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerID="ab3747227a757f0504d99915e357715fb3c51ebdb33206d57176bb4978303596" exitCode=0 Oct 02 18:32:17 crc kubenswrapper[4832]: I1002 18:32:17.809431 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" event={"ID":"5d0750e6-dd9d-4a96-97ec-97f9857702d9","Type":"ContainerDied","Data":"ab3747227a757f0504d99915e357715fb3c51ebdb33206d57176bb4978303596"} Oct 02 18:32:18 crc kubenswrapper[4832]: I1002 18:32:18.821856 4832 generic.go:334] "Generic (PLEG): container finished" podID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerID="0e69e7dea356eb006aae0ce7ec0019b4094654f644959a72f0d158dacd030a65" exitCode=0 Oct 02 18:32:18 crc kubenswrapper[4832]: I1002 18:32:18.821992 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" event={"ID":"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364","Type":"ContainerDied","Data":"0e69e7dea356eb006aae0ce7ec0019b4094654f644959a72f0d158dacd030a65"} Oct 02 18:32:18 crc kubenswrapper[4832]: I1002 18:32:18.824798 4832 generic.go:334] "Generic (PLEG): container finished" podID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerID="f5b99ff943fbecbd137647fa82c956a29e0445dc76793ac68dbf930fd5663165" exitCode=0 Oct 02 18:32:18 crc kubenswrapper[4832]: I1002 18:32:18.824858 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" event={"ID":"5d0750e6-dd9d-4a96-97ec-97f9857702d9","Type":"ContainerDied","Data":"f5b99ff943fbecbd137647fa82c956a29e0445dc76793ac68dbf930fd5663165"} Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.197917 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.206578 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.263764 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-util\") pod \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.263866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-bundle\") pod \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.263919 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-bundle\") pod \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.263974 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hztm\" (UniqueName: \"kubernetes.io/projected/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-kube-api-access-4hztm\") pod \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\" (UID: \"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364\") " Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.263990 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96xkn\" (UniqueName: \"kubernetes.io/projected/5d0750e6-dd9d-4a96-97ec-97f9857702d9-kube-api-access-96xkn\") pod \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.264024 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-util\") pod \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\" (UID: \"5d0750e6-dd9d-4a96-97ec-97f9857702d9\") " Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.265542 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-bundle" (OuterVolumeSpecName: "bundle") pod "dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" (UID: "dbdaf694-0aaf-4fd9-9e6d-01a3d8581364"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.266240 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-bundle" (OuterVolumeSpecName: "bundle") pod "5d0750e6-dd9d-4a96-97ec-97f9857702d9" (UID: "5d0750e6-dd9d-4a96-97ec-97f9857702d9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.270712 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-kube-api-access-4hztm" (OuterVolumeSpecName: "kube-api-access-4hztm") pod "dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" (UID: "dbdaf694-0aaf-4fd9-9e6d-01a3d8581364"). InnerVolumeSpecName "kube-api-access-4hztm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.272994 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0750e6-dd9d-4a96-97ec-97f9857702d9-kube-api-access-96xkn" (OuterVolumeSpecName: "kube-api-access-96xkn") pod "5d0750e6-dd9d-4a96-97ec-97f9857702d9" (UID: "5d0750e6-dd9d-4a96-97ec-97f9857702d9"). InnerVolumeSpecName "kube-api-access-96xkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.365834 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.365886 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.365907 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hztm\" (UniqueName: \"kubernetes.io/projected/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-kube-api-access-4hztm\") on node \"crc\" DevicePath \"\"" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.365927 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96xkn\" (UniqueName: \"kubernetes.io/projected/5d0750e6-dd9d-4a96-97ec-97f9857702d9-kube-api-access-96xkn\") on node \"crc\" DevicePath \"\"" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.600576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-util" (OuterVolumeSpecName: "util") pod "dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" (UID: "dbdaf694-0aaf-4fd9-9e6d-01a3d8581364"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.669862 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbdaf694-0aaf-4fd9-9e6d-01a3d8581364-util\") on node \"crc\" DevicePath \"\"" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.684459 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-util" (OuterVolumeSpecName: "util") pod "5d0750e6-dd9d-4a96-97ec-97f9857702d9" (UID: "5d0750e6-dd9d-4a96-97ec-97f9857702d9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.771849 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0750e6-dd9d-4a96-97ec-97f9857702d9-util\") on node \"crc\" DevicePath \"\"" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.847412 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" event={"ID":"dbdaf694-0aaf-4fd9-9e6d-01a3d8581364","Type":"ContainerDied","Data":"18cc15f18b8100ce43725dc20ca167046646e30f30573f7b144db941bfb597b6"} Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.847474 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18cc15f18b8100ce43725dc20ca167046646e30f30573f7b144db941bfb597b6" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.847488 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.853539 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" event={"ID":"5d0750e6-dd9d-4a96-97ec-97f9857702d9","Type":"ContainerDied","Data":"1b3f4829e68beb87458d64bcff69fbc4d4ac6fdea41cb89cd1cf5454cccb8c58"} Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.853602 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b3f4829e68beb87458d64bcff69fbc4d4ac6fdea41cb89cd1cf5454cccb8c58" Oct 02 18:32:20 crc kubenswrapper[4832]: I1002 18:32:20.853724 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.985692 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp"] Oct 02 18:32:29 crc kubenswrapper[4832]: E1002 18:32:29.986512 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerName="pull" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.986527 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerName="pull" Oct 02 18:32:29 crc kubenswrapper[4832]: E1002 18:32:29.986542 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerName="extract" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.986547 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerName="extract" Oct 02 18:32:29 crc kubenswrapper[4832]: E1002 18:32:29.986559 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerName="extract" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.986565 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerName="extract" Oct 02 18:32:29 crc kubenswrapper[4832]: E1002 18:32:29.986575 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerName="util" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.986580 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerName="util" Oct 02 18:32:29 crc kubenswrapper[4832]: E1002 18:32:29.986592 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerName="pull" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.986597 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerName="pull" Oct 02 18:32:29 crc kubenswrapper[4832]: E1002 18:32:29.986613 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerName="util" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.986618 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerName="util" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.986730 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbdaf694-0aaf-4fd9-9e6d-01a3d8581364" containerName="extract" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.986745 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0750e6-dd9d-4a96-97ec-97f9857702d9" containerName="extract" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.987511 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.994071 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.994252 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.994885 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-9nfx5" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.994944 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.994969 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Oct 02 18:32:29 crc kubenswrapper[4832]: I1002 18:32:29.994968 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.021013 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp"] Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.116048 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.116139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtcc\" (UniqueName: \"kubernetes.io/projected/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-kube-api-access-sjtcc\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.116218 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-manager-config\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.116251 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-webhook-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.116384 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-apiservice-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.218002 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-manager-config\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.218053 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-webhook-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.218112 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-apiservice-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.218134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.218175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjtcc\" (UniqueName: \"kubernetes.io/projected/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-kube-api-access-sjtcc\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.219769 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-manager-config\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.224063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-webhook-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.224393 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.232926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-apiservice-cert\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.246148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjtcc\" (UniqueName: \"kubernetes.io/projected/3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a-kube-api-access-sjtcc\") pod \"loki-operator-controller-manager-69857bc6ff-kl7qp\" (UID: \"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a\") " pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.301717 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.559060 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp"] Oct 02 18:32:30 crc kubenswrapper[4832]: I1002 18:32:30.914453 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" event={"ID":"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a","Type":"ContainerStarted","Data":"31a5696d3121fe979760f51a136dbfe2f42ca6278fd9c4ac28ddd97a553ae0a5"} Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.235831 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-8958c8b87-4cq9h"] Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.242413 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-8958c8b87-4cq9h"] Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.242526 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-8958c8b87-4cq9h" Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.247687 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.247715 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.247987 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-bj4jz" Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.384225 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzjh\" (UniqueName: \"kubernetes.io/projected/901f1678-8c0e-437e-bbb0-ba98d72c5aed-kube-api-access-7hzjh\") pod \"cluster-logging-operator-8958c8b87-4cq9h\" (UID: \"901f1678-8c0e-437e-bbb0-ba98d72c5aed\") " pod="openshift-logging/cluster-logging-operator-8958c8b87-4cq9h" Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.485870 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hzjh\" (UniqueName: \"kubernetes.io/projected/901f1678-8c0e-437e-bbb0-ba98d72c5aed-kube-api-access-7hzjh\") pod \"cluster-logging-operator-8958c8b87-4cq9h\" (UID: \"901f1678-8c0e-437e-bbb0-ba98d72c5aed\") " pod="openshift-logging/cluster-logging-operator-8958c8b87-4cq9h" Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.503851 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hzjh\" (UniqueName: \"kubernetes.io/projected/901f1678-8c0e-437e-bbb0-ba98d72c5aed-kube-api-access-7hzjh\") pod \"cluster-logging-operator-8958c8b87-4cq9h\" (UID: \"901f1678-8c0e-437e-bbb0-ba98d72c5aed\") " pod="openshift-logging/cluster-logging-operator-8958c8b87-4cq9h" Oct 02 18:32:33 crc kubenswrapper[4832]: I1002 18:32:33.572513 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-8958c8b87-4cq9h" Oct 02 18:32:35 crc kubenswrapper[4832]: I1002 18:32:35.911681 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-8958c8b87-4cq9h"] Oct 02 18:32:35 crc kubenswrapper[4832]: I1002 18:32:35.948804 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" event={"ID":"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a","Type":"ContainerStarted","Data":"0cbfa6ff193c045b0bffcc20cf077004fca19ac66a73eb5043cfb3f59c6f2244"} Oct 02 18:32:35 crc kubenswrapper[4832]: I1002 18:32:35.950322 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-8958c8b87-4cq9h" event={"ID":"901f1678-8c0e-437e-bbb0-ba98d72c5aed","Type":"ContainerStarted","Data":"282e331458906a458d5ab6394d5a2432fd1b34c11733b1e36517439de8f9e056"} Oct 02 18:32:46 crc kubenswrapper[4832]: I1002 18:32:46.019410 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" event={"ID":"3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a","Type":"ContainerStarted","Data":"a0dfc4f20e3d2965108697b176d22d198fb3917761a00469d8273837c5bea520"} Oct 02 18:32:46 crc kubenswrapper[4832]: I1002 18:32:46.021311 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-8958c8b87-4cq9h" event={"ID":"901f1678-8c0e-437e-bbb0-ba98d72c5aed","Type":"ContainerStarted","Data":"db74391beb196d32c85981de427808063e5bc474b21debc5076e51ba58dedf3d"} Oct 02 18:32:47 crc kubenswrapper[4832]: I1002 18:32:47.028709 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:47 crc kubenswrapper[4832]: I1002 18:32:47.033632 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" Oct 02 18:32:47 crc kubenswrapper[4832]: I1002 18:32:47.067073 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-69857bc6ff-kl7qp" podStartSLOduration=2.96547744 podStartE2EDuration="18.067044266s" podCreationTimestamp="2025-10-02 18:32:29 +0000 UTC" firstStartedPulling="2025-10-02 18:32:30.583719518 +0000 UTC m=+707.553162390" lastFinishedPulling="2025-10-02 18:32:45.685286344 +0000 UTC m=+722.654729216" observedRunningTime="2025-10-02 18:32:47.063940029 +0000 UTC m=+724.033382941" watchObservedRunningTime="2025-10-02 18:32:47.067044266 +0000 UTC m=+724.036487188" Oct 02 18:32:47 crc kubenswrapper[4832]: I1002 18:32:47.100952 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-8958c8b87-4cq9h" podStartSLOduration=4.387038597 podStartE2EDuration="14.100923564s" podCreationTimestamp="2025-10-02 18:32:33 +0000 UTC" firstStartedPulling="2025-10-02 18:32:35.926757202 +0000 UTC m=+712.896200084" lastFinishedPulling="2025-10-02 18:32:45.640642189 +0000 UTC m=+722.610085051" observedRunningTime="2025-10-02 18:32:47.093933676 +0000 UTC m=+724.063376588" watchObservedRunningTime="2025-10-02 18:32:47.100923564 +0000 UTC m=+724.070366466" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.489742 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Oct 02 18:32:53 crc 
kubenswrapper[4832]: I1002 18:32:53.491670 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.494423 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.494683 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.506333 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.605922 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5dtp\" (UniqueName: \"kubernetes.io/projected/104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66-kube-api-access-q5dtp\") pod \"minio\" (UID: \"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66\") " pod="minio-dev/minio" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.606162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c2dccfa3-3c73-4c0a-81bb-81f22b7706f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2dccfa3-3c73-4c0a-81bb-81f22b7706f9\") pod \"minio\" (UID: \"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66\") " pod="minio-dev/minio" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.707288 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5dtp\" (UniqueName: \"kubernetes.io/projected/104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66-kube-api-access-q5dtp\") pod \"minio\" (UID: \"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66\") " pod="minio-dev/minio" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.707644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c2dccfa3-3c73-4c0a-81bb-81f22b7706f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2dccfa3-3c73-4c0a-81bb-81f22b7706f9\") pod \"minio\" (UID: \"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66\") " pod="minio-dev/minio" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.713123 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.713198 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c2dccfa3-3c73-4c0a-81bb-81f22b7706f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2dccfa3-3c73-4c0a-81bb-81f22b7706f9\") pod \"minio\" (UID: \"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0865e1c4a7b12d3af0668c1ca3c734fdf221369f9c67f166615f51d022459ca7/globalmount\"" pod="minio-dev/minio" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.735351 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5dtp\" (UniqueName: \"kubernetes.io/projected/104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66-kube-api-access-q5dtp\") pod \"minio\" (UID: \"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66\") " pod="minio-dev/minio" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.745149 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c2dccfa3-3c73-4c0a-81bb-81f22b7706f9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c2dccfa3-3c73-4c0a-81bb-81f22b7706f9\") pod \"minio\" (UID: \"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66\") " pod="minio-dev/minio" Oct 02 18:32:53 crc kubenswrapper[4832]: I1002 18:32:53.821110 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Oct 02 18:32:54 crc kubenswrapper[4832]: I1002 18:32:54.056617 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Oct 02 18:32:54 crc kubenswrapper[4832]: I1002 18:32:54.076672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66","Type":"ContainerStarted","Data":"b330445f2c468ff9633222493466427cc77f44f948c85efbbf8668e1c6503c1b"} Oct 02 18:32:58 crc kubenswrapper[4832]: I1002 18:32:58.111120 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"104d0a3f-8e8d-4c82-bff9-eed8ff8bdd66","Type":"ContainerStarted","Data":"0f842c610905ec76a43a04cd72cb375d17f9046ea81e59e3f1a8e22bba06aebe"} Oct 02 18:32:58 crc kubenswrapper[4832]: I1002 18:32:58.125115 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.035688442 podStartE2EDuration="8.125094109s" podCreationTimestamp="2025-10-02 18:32:50 +0000 UTC" firstStartedPulling="2025-10-02 18:32:54.063577963 +0000 UTC m=+731.033020855" lastFinishedPulling="2025-10-02 18:32:57.15298364 +0000 UTC m=+734.122426522" observedRunningTime="2025-10-02 18:32:58.123364765 +0000 UTC m=+735.092807637" watchObservedRunningTime="2025-10-02 18:32:58.125094109 +0000 UTC m=+735.094536981" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.902707 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf"] Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.904031 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.909567 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.910196 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-7h2t4" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.911110 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.911360 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.920801 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.925505 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf"] Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.939119 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.939313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-ca-bundle\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.939352 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqzjd\" (UniqueName: \"kubernetes.io/projected/0ac07716-7573-4264-9530-b6dd1ea4ce14-kube-api-access-sqzjd\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.939381 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-distributor-http\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:01 crc kubenswrapper[4832]: I1002 18:33:01.939463 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac07716-7573-4264-9530-b6dd1ea4ce14-config\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.041436 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac07716-7573-4264-9530-b6dd1ea4ce14-config\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.041518 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.041572 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-ca-bundle\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.041595 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqzjd\" (UniqueName: \"kubernetes.io/projected/0ac07716-7573-4264-9530-b6dd1ea4ce14-kube-api-access-sqzjd\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.041614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-distributor-http\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.044050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-ca-bundle\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.045027 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ac07716-7573-4264-9530-b6dd1ea4ce14-config\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.056434 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.069961 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: 
\"kubernetes.io/secret/0ac07716-7573-4264-9530-b6dd1ea4ce14-logging-loki-distributor-http\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.077955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqzjd\" (UniqueName: \"kubernetes.io/projected/0ac07716-7573-4264-9530-b6dd1ea4ce14-kube-api-access-sqzjd\") pod \"logging-loki-distributor-6f5f7fff97-x5gwf\" (UID: \"0ac07716-7573-4264-9530-b6dd1ea4ce14\") " pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.090692 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5d954896cf-g55np"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.091695 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.095040 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.095291 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.101808 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.103039 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5d954896cf-g55np"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.144613 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.144684 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-s3\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.144730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a6d330-84e2-4071-9345-a5dd8496940a-config\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.144762 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc 
kubenswrapper[4832]: I1002 18:33:02.144824 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-querier-http\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.144860 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz7v9\" (UniqueName: \"kubernetes.io/projected/64a6d330-84e2-4071-9345-a5dd8496940a-kube-api-access-kz7v9\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.158562 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.159893 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.163817 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.164041 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.186757 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.232727 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245725 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a6d330-84e2-4071-9345-a5dd8496940a-config\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245774 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qj2m\" (UniqueName: \"kubernetes.io/projected/45f6d92a-3f93-4a09-8ed0-74ad13440476-kube-api-access-8qj2m\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245843 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-querier-http\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245862 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz7v9\" (UniqueName: \"kubernetes.io/projected/64a6d330-84e2-4071-9345-a5dd8496940a-kube-api-access-kz7v9\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245951 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-ca-bundle\") pod 
\"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.245970 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.246001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-s3\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.246020 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45f6d92a-3f93-4a09-8ed0-74ad13440476-config\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.247456 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64a6d330-84e2-4071-9345-a5dd8496940a-config\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.247959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.256033 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-querier-http\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.257275 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-s3\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.262369 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/64a6d330-84e2-4071-9345-a5dd8496940a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc 
kubenswrapper[4832]: I1002 18:33:02.278972 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz7v9\" (UniqueName: \"kubernetes.io/projected/64a6d330-84e2-4071-9345-a5dd8496940a-kube-api-access-kz7v9\") pod \"logging-loki-querier-5d954896cf-g55np\" (UID: \"64a6d330-84e2-4071-9345-a5dd8496940a\") " pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.291439 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.293491 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.297408 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.297664 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.297765 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.297539 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-jpzm2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.297851 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.298066 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.303248 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.304283 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.317652 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.321552 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350629 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-tls-secret\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350714 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350743 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-tls-secret\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350774 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45f6d92a-3f93-4a09-8ed0-74ad13440476-config\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350853 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qj2m\" (UniqueName: \"kubernetes.io/projected/45f6d92a-3f93-4a09-8ed0-74ad13440476-kube-api-access-8qj2m\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350895 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-tenants\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350920 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-rbac\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.350986 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-rbac\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351017 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351040 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-lokistack-gateway\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351084 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hctzv\" (UniqueName: \"kubernetes.io/projected/242ef9e3-c339-468a-b6a7-298dfab16a59-kube-api-access-hctzv\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351120 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-lokistack-gateway\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351148 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351202 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351229 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjvx6\" (UniqueName: \"kubernetes.io/projected/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-kube-api-access-xjvx6\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351257 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-tenants\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351308 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351333 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.351386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.353335 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.354007 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45f6d92a-3f93-4a09-8ed0-74ad13440476-config\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.363798 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.373952 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/45f6d92a-3f93-4a09-8ed0-74ad13440476-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.375874 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qj2m\" (UniqueName: \"kubernetes.io/projected/45f6d92a-3f93-4a09-8ed0-74ad13440476-kube-api-access-8qj2m\") pod \"logging-loki-query-frontend-6fbbbc8b7d-kqxp2\" (UID: \"45f6d92a-3f93-4a09-8ed0-74ad13440476\") " pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.422653 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453256 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-tenants\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453310 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-rbac\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453332 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453346 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-rbac\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453364 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 
18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-lokistack-gateway\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hctzv\" (UniqueName: \"kubernetes.io/projected/242ef9e3-c339-468a-b6a7-298dfab16a59-kube-api-access-hctzv\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453433 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-lokistack-gateway\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453460 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjvx6\" (UniqueName: \"kubernetes.io/projected/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-kube-api-access-xjvx6\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-tenants\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453560 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-tls-secret\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453609 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-tls-secret\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.453627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.454678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.454919 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.454934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-rbac\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.459821 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-tenants\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.460885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-lokistack-gateway\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.461187 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: 
\"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.461681 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-rbac\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.461744 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.462655 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.462879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-logging-loki-ca-bundle\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.463324 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/242ef9e3-c339-468a-b6a7-298dfab16a59-lokistack-gateway\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.464985 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-tls-secret\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.465012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-tls-secret\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.471359 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/242ef9e3-c339-468a-b6a7-298dfab16a59-tenants\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.473916 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjvx6\" (UniqueName: 
\"kubernetes.io/projected/aa1f7d1a-2f01-4d70-b78c-0b28692ce57c-kube-api-access-xjvx6\") pod \"logging-loki-gateway-6f7dfcd5dd-w48kx\" (UID: \"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.483399 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hctzv\" (UniqueName: \"kubernetes.io/projected/242ef9e3-c339-468a-b6a7-298dfab16a59-kube-api-access-hctzv\") pod \"logging-loki-gateway-6f7dfcd5dd-w4hsc\" (UID: \"242ef9e3-c339-468a-b6a7-298dfab16a59\") " pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.487887 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.624066 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.633219 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.778736 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf"] Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.804297 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2"] Oct 02 18:33:02 crc kubenswrapper[4832]: W1002 18:33:02.900122 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64a6d330_84e2_4071_9345_a5dd8496940a.slice/crio-62cbc69cb7e3ccc48c59a0d437fae902d06695720bece5c133015e645ea39129 WatchSource:0}: Error finding container 62cbc69cb7e3ccc48c59a0d437fae902d06695720bece5c133015e645ea39129: Status 404 returned error can't find the container with id 62cbc69cb7e3ccc48c59a0d437fae902d06695720bece5c133015e645ea39129 Oct 02 18:33:02 crc kubenswrapper[4832]: I1002 18:33:02.906771 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5d954896cf-g55np"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.056619 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.058254 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.062878 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.063579 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.071240 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.140437 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.148957 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.150807 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.153758 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.154174 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.155371 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.155413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" event={"ID":"64a6d330-84e2-4071-9345-a5dd8496940a","Type":"ContainerStarted","Data":"62cbc69cb7e3ccc48c59a0d437fae902d06695720bece5c133015e645ea39129"} Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.157538 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" event={"ID":"45f6d92a-3f93-4a09-8ed0-74ad13440476","Type":"ContainerStarted","Data":"cf255adc7632f7ba608cfe3d299c54d5ac7baaaae69103f6db5880b42682c223"} Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.158989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" event={"ID":"0ac07716-7573-4264-9530-b6dd1ea4ce14","Type":"ContainerStarted","Data":"e3667f991c455717e3bd1a1fcb7cbfe5a3f478dd158c703ae96e1c70b436447c"} Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.160129 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" event={"ID":"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c","Type":"ContainerStarted","Data":"6398b752c5ece948e83184b280f939b4a87580421793b25e57ede12a137f56d0"} Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.173409 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.173467 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757a6407-20f2-4b69-816a-6b01c7e5cc79-config\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.173513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.173557 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.173605 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frsjh\" (UniqueName: \"kubernetes.io/projected/757a6407-20f2-4b69-816a-6b01c7e5cc79-kube-api-access-frsjh\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.173627 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.173747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e027c5a7-40ce-4f2e-86b5-3f97f61fba12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e027c5a7-40ce-4f2e-86b5-3f97f61fba12\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.173795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2ba7d380-c086-459e-acaf-4b16f2c4af75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba7d380-c086-459e-acaf-4b16f2c4af75\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.177869 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.206939 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.207895 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.210599 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.210731 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.219830 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.276229 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f9e90314-ee9d-4851-bfb7-f7c45f32325a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9e90314-ee9d-4851-bfb7-f7c45f32325a\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz25b\" (UniqueName: \"kubernetes.io/projected/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-kube-api-access-zz25b\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308499 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e027c5a7-40ce-4f2e-86b5-3f97f61fba12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e027c5a7-40ce-4f2e-86b5-3f97f61fba12\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308526 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2ba7d380-c086-459e-acaf-4b16f2c4af75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba7d380-c086-459e-acaf-4b16f2c4af75\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757a6407-20f2-4b69-816a-6b01c7e5cc79-config\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308636 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308656 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308715 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308749 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308789 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-config\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308825 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frsjh\" (UniqueName: \"kubernetes.io/projected/757a6407-20f2-4b69-816a-6b01c7e5cc79-kube-api-access-frsjh\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308846 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.308866 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.312311 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/757a6407-20f2-4b69-816a-6b01c7e5cc79-config\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.313173 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.321544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.325877 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.326947 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/757a6407-20f2-4b69-816a-6b01c7e5cc79-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.348068 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frsjh\" (UniqueName: \"kubernetes.io/projected/757a6407-20f2-4b69-816a-6b01c7e5cc79-kube-api-access-frsjh\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.355095 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.355141 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e027c5a7-40ce-4f2e-86b5-3f97f61fba12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e027c5a7-40ce-4f2e-86b5-3f97f61fba12\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f43b623f1916756fdbcbaad8d800cf08a02bf75af013c0e2ba8f7e72dc2f607/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.355680 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.355702 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2ba7d380-c086-459e-acaf-4b16f2c4af75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba7d380-c086-459e-acaf-4b16f2c4af75\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3343ad9e3cc847d026ad98eb958b45e365a697d1aa308133009a1f49bb3dd4e2/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.392055 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2ba7d380-c086-459e-acaf-4b16f2c4af75\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2ba7d380-c086-459e-acaf-4b16f2c4af75\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.395132 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e027c5a7-40ce-4f2e-86b5-3f97f61fba12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e027c5a7-40ce-4f2e-86b5-3f97f61fba12\") pod \"logging-loki-ingester-0\" (UID: \"757a6407-20f2-4b69-816a-6b01c7e5cc79\") " pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410598 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410631 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410679 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-config\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410748 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a49601-8595-459e-b680-391e7b597054-config\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410768 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4cc63b07-1901-456c-a232-b847760c9dc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4cc63b07-1901-456c-a232-b847760c9dc9\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410809 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410831 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410861 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz25b\" (UniqueName: \"kubernetes.io/projected/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-kube-api-access-zz25b\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410880 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f9e90314-ee9d-4851-bfb7-f7c45f32325a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9e90314-ee9d-4851-bfb7-f7c45f32325a\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410909 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm4lg\" (UniqueName: \"kubernetes.io/projected/a6a49601-8595-459e-b680-391e7b597054-kube-api-access-vm4lg\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.410925 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a6a49601-8595-459e-b680-391e7b597054-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.412612 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-config\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.412885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.413823 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.414354 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.414391 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.414516 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f9e90314-ee9d-4851-bfb7-f7c45f32325a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9e90314-ee9d-4851-bfb7-f7c45f32325a\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f13227de7e1210398fd046ac27e7ebf23463d0d545d0647aa73bff604e91acd4/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.416827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.424824 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.426866 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz25b\" (UniqueName: \"kubernetes.io/projected/9c643836-b5c3-48dc-8b08-f8a5bcbea2c7-kube-api-access-zz25b\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.447414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f9e90314-ee9d-4851-bfb7-f7c45f32325a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f9e90314-ee9d-4851-bfb7-f7c45f32325a\") pod \"logging-loki-index-gateway-0\" (UID: \"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7\") " pod="openshift-logging/logging-loki-index-gateway-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.512842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.512891 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a49601-8595-459e-b680-391e7b597054-config\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.512922 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4cc63b07-1901-456c-a232-b847760c9dc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4cc63b07-1901-456c-a232-b847760c9dc9\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.512942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.512996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm4lg\" (UniqueName: \"kubernetes.io/projected/a6a49601-8595-459e-b680-391e7b597054-kube-api-access-vm4lg\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.513016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6a49601-8595-459e-b680-391e7b597054-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.513033 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.530177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.530861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6a49601-8595-459e-b680-391e7b597054-config\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.533921 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.533960 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4cc63b07-1901-456c-a232-b847760c9dc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4cc63b07-1901-456c-a232-b847760c9dc9\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/16ce482f4aaa132dc5628fb20d7a61100f6296eadda6829856f2effdd57acc9e/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.534020 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6a49601-8595-459e-b680-391e7b597054-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.534148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.534392 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/a6a49601-8595-459e-b680-391e7b597054-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0" Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.544010 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.547934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm4lg\" (UniqueName: \"kubernetes.io/projected/a6a49601-8595-459e-b680-391e7b597054-kube-api-access-vm4lg\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0"
Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.577154 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4cc63b07-1901-456c-a232-b847760c9dc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4cc63b07-1901-456c-a232-b847760c9dc9\") pod \"logging-loki-compactor-0\" (UID: \"a6a49601-8595-459e-b680-391e7b597054\") " pod="openshift-logging/logging-loki-compactor-0"
Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.787936 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.884278 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Oct 02 18:33:03 crc kubenswrapper[4832]: W1002 18:33:03.897630 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod757a6407_20f2_4b69_816a_6b01c7e5cc79.slice/crio-4297bb8986ec19e4b950188f3a0e412ca98aae8ae6225f28a9b655b1b9de42cf WatchSource:0}: Error finding container 4297bb8986ec19e4b950188f3a0e412ca98aae8ae6225f28a9b655b1b9de42cf: Status 404 returned error can't find the container with id 4297bb8986ec19e4b950188f3a0e412ca98aae8ae6225f28a9b655b1b9de42cf
Oct 02 18:33:03 crc kubenswrapper[4832]: I1002 18:33:03.980294 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Oct 02 18:33:04 crc kubenswrapper[4832]: I1002 18:33:04.173773 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" event={"ID":"242ef9e3-c339-468a-b6a7-298dfab16a59","Type":"ContainerStarted","Data":"1e8eb0fbfdebb218bd1253084936ee733c063d1e6d12f0d889b4cdb5932e1d81"}
Oct 02 18:33:04 crc kubenswrapper[4832]: I1002 18:33:04.175508 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7","Type":"ContainerStarted","Data":"78da199a36fb053c5e2e6bae47f7ca2bf73060fff0df8beadbe6ea2fd2e84d6d"}
Oct 02 18:33:04 crc kubenswrapper[4832]: I1002 18:33:04.177218 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"757a6407-20f2-4b69-816a-6b01c7e5cc79","Type":"ContainerStarted","Data":"4297bb8986ec19e4b950188f3a0e412ca98aae8ae6225f28a9b655b1b9de42cf"}
Oct 02 18:33:04 crc kubenswrapper[4832]: I1002 18:33:04.307849 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Oct 02 18:33:04 crc kubenswrapper[4832]: W1002 18:33:04.336641 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6a49601_8595_459e_b680_391e7b597054.slice/crio-ed5fc7dacd8601204ad98970b388096351696a268e6b0ee1ad6f131e763012ef WatchSource:0}: Error finding container ed5fc7dacd8601204ad98970b388096351696a268e6b0ee1ad6f131e763012ef: Status 404 returned error can't find the container with id ed5fc7dacd8601204ad98970b388096351696a268e6b0ee1ad6f131e763012ef
Oct 02 18:33:05 crc kubenswrapper[4832]: I1002 18:33:05.192689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"a6a49601-8595-459e-b680-391e7b597054","Type":"ContainerStarted","Data":"ed5fc7dacd8601204ad98970b388096351696a268e6b0ee1ad6f131e763012ef"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.210992 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" event={"ID":"0ac07716-7573-4264-9530-b6dd1ea4ce14","Type":"ContainerStarted","Data":"08432599f4cad32559180a0580f6fbdb4f49a8db416f362c18e323b9a6624716"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.211486 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.213909 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" event={"ID":"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c","Type":"ContainerStarted","Data":"63f711afc8db23e2bd48c85de5aefc82fca8bebea0036ffc21a42e7e21225221"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.215678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" event={"ID":"242ef9e3-c339-468a-b6a7-298dfab16a59","Type":"ContainerStarted","Data":"f4a999c1df5acd6fece2f454d3d6544c16d935fd69c73fe7c23946a8a2b7a7ea"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.217215 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"9c643836-b5c3-48dc-8b08-f8a5bcbea2c7","Type":"ContainerStarted","Data":"3ab455ced446138469c2d10fdef57cf31a9047e5fde486be441462fb2f335ee2"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.217348 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.219789 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" event={"ID":"64a6d330-84e2-4071-9345-a5dd8496940a","Type":"ContainerStarted","Data":"6e540287d4550b7ac08bfbc8b24bbe02a94c41a52e99df2be0ad2e25316c02ed"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.219894 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5d954896cf-g55np"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.221307 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"757a6407-20f2-4b69-816a-6b01c7e5cc79","Type":"ContainerStarted","Data":"8b19a6346e85e8a1815a79af5d7f5fc246a565d9d02ed0d31012b1ee3a9e2a19"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.221376 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.229766 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.229804 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"a6a49601-8595-459e-b680-391e7b597054","Type":"ContainerStarted","Data":"d71fb420be903c697fa12eebcb69adb192c3ad9fb3c246164ff98d53e3ad62d9"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.229820 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.229828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" event={"ID":"45f6d92a-3f93-4a09-8ed0-74ad13440476","Type":"ContainerStarted","Data":"9b7e850e6abd30e8bc4515db7b2d14f9507b3c1603bec480ab15c8a9756da9ba"}
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.238418 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf" podStartSLOduration=2.429483495 podStartE2EDuration="6.238390117s" podCreationTimestamp="2025-10-02 18:33:01 +0000 UTC" firstStartedPulling="2025-10-02 18:33:02.803848504 +0000 UTC m=+739.773291376" lastFinishedPulling="2025-10-02 18:33:06.612755086 +0000 UTC m=+743.582197998" observedRunningTime="2025-10-02 18:33:07.232102301 +0000 UTC m=+744.201545193" watchObservedRunningTime="2025-10-02 18:33:07.238390117 +0000 UTC m=+744.207832989"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.255218 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.47279345 podStartE2EDuration="5.255202402s" podCreationTimestamp="2025-10-02 18:33:02 +0000 UTC" firstStartedPulling="2025-10-02 18:33:03.901683473 +0000 UTC m=+740.871126345" lastFinishedPulling="2025-10-02 18:33:06.684092425 +0000 UTC m=+743.653535297" observedRunningTime="2025-10-02 18:33:07.254836231 +0000 UTC m=+744.224279143" watchObservedRunningTime="2025-10-02 18:33:07.255202402 +0000 UTC m=+744.224645274"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.299438 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=2.95373781 podStartE2EDuration="5.299415575s" podCreationTimestamp="2025-10-02 18:33:02 +0000 UTC" firstStartedPulling="2025-10-02 18:33:04.338999648 +0000 UTC m=+741.308442520" lastFinishedPulling="2025-10-02 18:33:06.684677413 +0000 UTC m=+743.654120285" observedRunningTime="2025-10-02 18:33:07.291810037 +0000 UTC m=+744.261252939" watchObservedRunningTime="2025-10-02 18:33:07.299415575 +0000 UTC m=+744.268858487"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.315209 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=2.630849469 podStartE2EDuration="5.315190367s" podCreationTimestamp="2025-10-02 18:33:02 +0000 UTC" firstStartedPulling="2025-10-02 18:33:03.997363092 +0000 UTC m=+740.966805974" lastFinishedPulling="2025-10-02 18:33:06.681704 +0000 UTC m=+743.651146872" observedRunningTime="2025-10-02 18:33:07.311675047 +0000 UTC m=+744.281117949" watchObservedRunningTime="2025-10-02 18:33:07.315190367 +0000 UTC m=+744.284633259"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.335944 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2" podStartSLOduration=1.498350296 podStartE2EDuration="5.335928385s" podCreationTimestamp="2025-10-02 18:33:02 +0000 UTC" firstStartedPulling="2025-10-02 18:33:02.812361209 +0000 UTC m=+739.781804081" lastFinishedPulling="2025-10-02 18:33:06.649939298 +0000 UTC m=+743.619382170" observedRunningTime="2025-10-02 18:33:07.333811799 +0000 UTC m=+744.303254671" watchObservedRunningTime="2025-10-02 18:33:07.335928385 +0000 UTC m=+744.305371257"
Oct 02 18:33:07 crc kubenswrapper[4832]: I1002 18:33:07.363605 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5d954896cf-g55np" podStartSLOduration=1.584026725 podStartE2EDuration="5.363584589s" podCreationTimestamp="2025-10-02 18:33:02 +0000 UTC" firstStartedPulling="2025-10-02 18:33:02.902855648 +0000 UTC m=+739.872298520" lastFinishedPulling="2025-10-02 18:33:06.682413512 +0000 UTC m=+743.651856384" observedRunningTime="2025-10-02 18:33:07.356724425 +0000 UTC m=+744.326167307" watchObservedRunningTime="2025-10-02 18:33:07.363584589 +0000 UTC m=+744.333027471"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.250791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" event={"ID":"aa1f7d1a-2f01-4d70-b78c-0b28692ce57c","Type":"ContainerStarted","Data":"062ccebe4b033153003a941d1a0964f5078216a1813507d974cd2039bc956ee6"}
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.250877 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.250901 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.254241 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" event={"ID":"242ef9e3-c339-468a-b6a7-298dfab16a59","Type":"ContainerStarted","Data":"9690d54646b9a4e4b803dad3badf4bb1107cbac043948d4684450adff1e6330c"}
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.254625 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.254646 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.267410 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.274780 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.275170 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.279428 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.292171 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w48kx" podStartSLOduration=2.223385785 podStartE2EDuration="8.292147469s" podCreationTimestamp="2025-10-02 18:33:02 +0000 UTC" firstStartedPulling="2025-10-02 18:33:03.132383301 +0000 UTC m=+740.101826183" lastFinishedPulling="2025-10-02 18:33:09.201144995 +0000 UTC m=+746.170587867" observedRunningTime="2025-10-02 18:33:10.288393162 +0000 UTC m=+747.257836074" watchObservedRunningTime="2025-10-02 18:33:10.292147469 +0000 UTC m=+747.261590361"
Oct 02 18:33:10 crc kubenswrapper[4832]: I1002 18:33:10.333214 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-6f7dfcd5dd-w4hsc" podStartSLOduration=2.282974907 podStartE2EDuration="8.333196913s" podCreationTimestamp="2025-10-02 18:33:02 +0000 UTC" firstStartedPulling="2025-10-02 18:33:03.157525666 +0000 UTC m=+740.126968538" lastFinishedPulling="2025-10-02 18:33:09.207747672 +0000 UTC m=+746.177190544" observedRunningTime="2025-10-02 18:33:10.331482658 +0000 UTC m=+747.300925530" watchObservedRunningTime="2025-10-02 18:33:10.333196913 +0000 UTC m=+747.302639785"
Oct 02 18:33:22 crc kubenswrapper[4832]: I1002 18:33:22.241060 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-6f5f7fff97-x5gwf"
Oct 02 18:33:22 crc kubenswrapper[4832]: I1002 18:33:22.430882 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5d954896cf-g55np"
Oct 02 18:33:22 crc kubenswrapper[4832]: I1002 18:33:22.496253 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6fbbbc8b7d-kqxp2"
Oct 02 18:33:23 crc kubenswrapper[4832]: I1002 18:33:23.435480 4832 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Oct 02 18:33:23 crc kubenswrapper[4832]: I1002 18:33:23.435855 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="757a6407-20f2-4b69-816a-6b01c7e5cc79" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Oct 02 18:33:23 crc kubenswrapper[4832]: I1002 18:33:23.551960 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Oct 02 18:33:23 crc kubenswrapper[4832]: I1002 18:33:23.798208 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Oct 02 18:33:27 crc kubenswrapper[4832]: I1002 18:33:27.558504 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w89r8"]
Oct 02 18:33:27 crc kubenswrapper[4832]: I1002 18:33:27.559286 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" podUID="a7c249ac-abfd-42b2-b391-5018d1695100" containerName="controller-manager" containerID="cri-o://2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc" gracePeriod=30
Oct 02 18:33:27 crc kubenswrapper[4832]: I1002 18:33:27.674652 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"]
Oct 02 18:33:27 crc kubenswrapper[4832]: I1002 18:33:27.674850 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" podUID="245c924a-8033-464a-be07-6e7ebbb7d814" containerName="route-controller-manager" containerID="cri-o://bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971" gracePeriod=30
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.178511 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.260468 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"]
Oct 02 18:33:29 crc kubenswrapper[4832]: E1002 18:33:29.260884 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c249ac-abfd-42b2-b391-5018d1695100" containerName="controller-manager"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.260906 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c249ac-abfd-42b2-b391-5018d1695100" containerName="controller-manager"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.261104 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c249ac-abfd-42b2-b391-5018d1695100" containerName="controller-manager"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.261934 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"]
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.262054 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.277223 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7tfg\" (UniqueName: \"kubernetes.io/projected/a7c249ac-abfd-42b2-b391-5018d1695100-kube-api-access-l7tfg\") pod \"a7c249ac-abfd-42b2-b391-5018d1695100\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.277333 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-proxy-ca-bundles\") pod \"a7c249ac-abfd-42b2-b391-5018d1695100\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.277446 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-config\") pod \"a7c249ac-abfd-42b2-b391-5018d1695100\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.277499 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c249ac-abfd-42b2-b391-5018d1695100-serving-cert\") pod \"a7c249ac-abfd-42b2-b391-5018d1695100\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.277528 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-client-ca\") pod \"a7c249ac-abfd-42b2-b391-5018d1695100\" (UID: \"a7c249ac-abfd-42b2-b391-5018d1695100\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.278187 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a7c249ac-abfd-42b2-b391-5018d1695100" (UID: "a7c249ac-abfd-42b2-b391-5018d1695100"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.278410 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-client-ca" (OuterVolumeSpecName: "client-ca") pod "a7c249ac-abfd-42b2-b391-5018d1695100" (UID: "a7c249ac-abfd-42b2-b391-5018d1695100"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.278771 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-config" (OuterVolumeSpecName: "config") pod "a7c249ac-abfd-42b2-b391-5018d1695100" (UID: "a7c249ac-abfd-42b2-b391-5018d1695100"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.284604 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c249ac-abfd-42b2-b391-5018d1695100-kube-api-access-l7tfg" (OuterVolumeSpecName: "kube-api-access-l7tfg") pod "a7c249ac-abfd-42b2-b391-5018d1695100" (UID: "a7c249ac-abfd-42b2-b391-5018d1695100"). InnerVolumeSpecName "kube-api-access-l7tfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.311941 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c249ac-abfd-42b2-b391-5018d1695100-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a7c249ac-abfd-42b2-b391-5018d1695100" (UID: "a7c249ac-abfd-42b2-b391-5018d1695100"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.368108 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.384976 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd403e11-4753-4ce1-8895-1dc7873bc40a-serving-cert\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385088 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-config\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385243 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68fr6\" (UniqueName: \"kubernetes.io/projected/bd403e11-4753-4ce1-8895-1dc7873bc40a-kube-api-access-68fr6\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385332 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-proxy-ca-bundles\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385568 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-client-ca\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385691 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7tfg\" (UniqueName: \"kubernetes.io/projected/a7c249ac-abfd-42b2-b391-5018d1695100-kube-api-access-l7tfg\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385746 4832 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385767 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-config\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385783 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7c249ac-abfd-42b2-b391-5018d1695100-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.385831 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7c249ac-abfd-42b2-b391-5018d1695100-client-ca\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.414722 4832 generic.go:334] "Generic (PLEG): container finished" podID="a7c249ac-abfd-42b2-b391-5018d1695100" containerID="2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc" exitCode=0
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.414828 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.414810 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" event={"ID":"a7c249ac-abfd-42b2-b391-5018d1695100","Type":"ContainerDied","Data":"2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc"}
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.415233 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-w89r8" event={"ID":"a7c249ac-abfd-42b2-b391-5018d1695100","Type":"ContainerDied","Data":"177f6bccad979fc0aa345a344515ec17c1c7d343da0e8ac5a7e1f8338b9e1483"}
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.415323 4832 scope.go:117] "RemoveContainer" containerID="2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.417338 4832 generic.go:334] "Generic (PLEG): container finished" podID="245c924a-8033-464a-be07-6e7ebbb7d814" containerID="bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971" exitCode=0
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.417462 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" event={"ID":"245c924a-8033-464a-be07-6e7ebbb7d814","Type":"ContainerDied","Data":"bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971"}
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.417465 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.417535 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf" event={"ID":"245c924a-8033-464a-be07-6e7ebbb7d814","Type":"ContainerDied","Data":"a54b626116619bd052d43850f12a2ac4d4c1cac2ae862f82009dc4a269613ff1"}
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.444107 4832 scope.go:117] "RemoveContainer" containerID="2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc"
Oct 02 18:33:29 crc kubenswrapper[4832]: E1002 18:33:29.445044 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc\": container with ID starting with 2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc not found: ID does not exist" containerID="2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.445146 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc"} err="failed to get container status \"2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc\": rpc error: code = NotFound desc = could not find container \"2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc\": container with ID starting with 2046419c7d3a119d092ba21e4ec46ed3b6c3327307604a1b8cbe0423d6eb68dc not found: ID does not exist"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.445231 4832 scope.go:117] "RemoveContainer" containerID="bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.448592 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w89r8"]
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.450457 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-w89r8"]
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.461917 4832 scope.go:117] "RemoveContainer" containerID="bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971"
Oct 02 18:33:29 crc kubenswrapper[4832]: E1002 18:33:29.462355 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971\": container with ID starting with bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971 not found: ID does not exist" containerID="bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.462462 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971"} err="failed to get container status \"bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971\": rpc error: code = NotFound desc = could not find container \"bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971\": container with ID starting with bdafae6aff7485ad592315bfa3a0e2faf89b33f8bf2f19a42be2044ddbf87971 not found: ID does not exist"
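The RemoveContainer / "code = NotFound" pairs above are a benign race rather than a failure: the PLEG has already reported ContainerDied, CRI-O has removed the container, and the kubelet's follow-up status query finds nothing left to delete. A short Python sketch for pulling these occurrences out of a saved journal; the kubelet.log file name is an assumption (e.g. from journalctl -u kubelet > kubelet.log):

import re

# Count DeleteContainer/NotFound retries per container ID; a count of 1 per ID
# (as in this log) is ordinary cleanup noise, repeated counts would be unusual.
pat = re.compile(r'"DeleteContainer returned error" containerID=\{"Type":"cri-o","ID":"([0-9a-f]+)"\}.*NotFound')
counts = {}
with open("kubelet.log") as f:            # assumed capture of this journal
    for line in f:
        m = pat.search(line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
for cid, n in sorted(counts.items()):
    print(cid[:12], n)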
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486364 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/245c924a-8033-464a-be07-6e7ebbb7d814-serving-cert\") pod \"245c924a-8033-464a-be07-6e7ebbb7d814\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486408 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-client-ca\") pod \"245c924a-8033-464a-be07-6e7ebbb7d814\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486537 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-config\") pod \"245c924a-8033-464a-be07-6e7ebbb7d814\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486648 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pphbl\" (UniqueName: \"kubernetes.io/projected/245c924a-8033-464a-be07-6e7ebbb7d814-kube-api-access-pphbl\") pod \"245c924a-8033-464a-be07-6e7ebbb7d814\" (UID: \"245c924a-8033-464a-be07-6e7ebbb7d814\") "
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-client-ca\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486896 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd403e11-4753-4ce1-8895-1dc7873bc40a-serving-cert\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-config\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486960 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68fr6\" (UniqueName: \"kubernetes.io/projected/bd403e11-4753-4ce1-8895-1dc7873bc40a-kube-api-access-68fr6\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.486985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-proxy-ca-bundles\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.487791 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-config" (OuterVolumeSpecName: "config") pod "245c924a-8033-464a-be07-6e7ebbb7d814" (UID: "245c924a-8033-464a-be07-6e7ebbb7d814"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.488117 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-proxy-ca-bundles\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.488678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-config\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.488806 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-client-ca" (OuterVolumeSpecName: "client-ca") pod "245c924a-8033-464a-be07-6e7ebbb7d814" (UID: "245c924a-8033-464a-be07-6e7ebbb7d814"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.489676 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd403e11-4753-4ce1-8895-1dc7873bc40a-client-ca\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.490382 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245c924a-8033-464a-be07-6e7ebbb7d814-kube-api-access-pphbl" (OuterVolumeSpecName: "kube-api-access-pphbl") pod "245c924a-8033-464a-be07-6e7ebbb7d814" (UID: "245c924a-8033-464a-be07-6e7ebbb7d814"). InnerVolumeSpecName "kube-api-access-pphbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.490714 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245c924a-8033-464a-be07-6e7ebbb7d814-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "245c924a-8033-464a-be07-6e7ebbb7d814" (UID: "245c924a-8033-464a-be07-6e7ebbb7d814"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.493047 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd403e11-4753-4ce1-8895-1dc7873bc40a-serving-cert\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.505409 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68fr6\" (UniqueName: \"kubernetes.io/projected/bd403e11-4753-4ce1-8895-1dc7873bc40a-kube-api-access-68fr6\") pod \"controller-manager-7b74d579c4-dj2rk\" (UID: \"bd403e11-4753-4ce1-8895-1dc7873bc40a\") " pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.579536 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.588612 4832 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/245c924a-8033-464a-be07-6e7ebbb7d814-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.588648 4832 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-client-ca\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.588659 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/245c924a-8033-464a-be07-6e7ebbb7d814-config\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.588671 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pphbl\" (UniqueName: \"kubernetes.io/projected/245c924a-8033-464a-be07-6e7ebbb7d814-kube-api-access-pphbl\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.749432 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"]
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.754172 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xgwwf"]
Oct 02 18:33:29 crc kubenswrapper[4832]: I1002 18:33:29.982459 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b74d579c4-dj2rk"]
Oct 02 18:33:29 crc kubenswrapper[4832]: W1002 18:33:29.993897 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd403e11_4753_4ce1_8895_1dc7873bc40a.slice/crio-ccd0dae27aef766366a0240f7ed8f7b12ee5d36e1af28870f206b51d981be9f6 WatchSource:0}: Error finding container ccd0dae27aef766366a0240f7ed8f7b12ee5d36e1af28870f206b51d981be9f6: Status 404 returned error can't find the container with id ccd0dae27aef766366a0240f7ed8f7b12ee5d36e1af28870f206b51d981be9f6
Oct 02 18:33:30 crc kubenswrapper[4832]: I1002 18:33:30.425966 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk" event={"ID":"bd403e11-4753-4ce1-8895-1dc7873bc40a","Type":"ContainerStarted","Data":"9ef33199fcca5f20a3a24767fcf6fa66b56cadd4703af2115f17bd8e9e4efd34"}
event={"ID":"bd403e11-4753-4ce1-8895-1dc7873bc40a","Type":"ContainerStarted","Data":"9ef33199fcca5f20a3a24767fcf6fa66b56cadd4703af2115f17bd8e9e4efd34"} Oct 02 18:33:30 crc kubenswrapper[4832]: I1002 18:33:30.426022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk" event={"ID":"bd403e11-4753-4ce1-8895-1dc7873bc40a","Type":"ContainerStarted","Data":"ccd0dae27aef766366a0240f7ed8f7b12ee5d36e1af28870f206b51d981be9f6"} Oct 02 18:33:30 crc kubenswrapper[4832]: I1002 18:33:30.426377 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk" Oct 02 18:33:30 crc kubenswrapper[4832]: I1002 18:33:30.433767 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk" Oct 02 18:33:30 crc kubenswrapper[4832]: I1002 18:33:30.444186 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b74d579c4-dj2rk" podStartSLOduration=3.444163777 podStartE2EDuration="3.444163777s" podCreationTimestamp="2025-10-02 18:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:33:30.440800422 +0000 UTC m=+767.410243294" watchObservedRunningTime="2025-10-02 18:33:30.444163777 +0000 UTC m=+767.413606649" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.239780 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245c924a-8033-464a-be07-6e7ebbb7d814" path="/var/lib/kubelet/pods/245c924a-8033-464a-be07-6e7ebbb7d814/volumes" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.241145 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c249ac-abfd-42b2-b391-5018d1695100" path="/var/lib/kubelet/pods/a7c249ac-abfd-42b2-b391-5018d1695100/volumes" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.523279 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl"] Oct 02 18:33:31 crc kubenswrapper[4832]: E1002 18:33:31.523595 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245c924a-8033-464a-be07-6e7ebbb7d814" containerName="route-controller-manager" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.523610 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="245c924a-8033-464a-be07-6e7ebbb7d814" containerName="route-controller-manager" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.523755 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="245c924a-8033-464a-be07-6e7ebbb7d814" containerName="route-controller-manager" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.524315 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.529446 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.529701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.529896 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.530046 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.530249 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.531242 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.546985 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl"] Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.631295 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b8d8b9-e6f7-404f-8541-9e640332a995-config\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.631365 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6gzd\" (UniqueName: \"kubernetes.io/projected/e3b8d8b9-e6f7-404f-8541-9e640332a995-kube-api-access-n6gzd\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.631651 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3b8d8b9-e6f7-404f-8541-9e640332a995-client-ca\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.631807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3b8d8b9-e6f7-404f-8541-9e640332a995-serving-cert\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.733087 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3b8d8b9-e6f7-404f-8541-9e640332a995-client-ca\") pod 
\"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.733197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3b8d8b9-e6f7-404f-8541-9e640332a995-serving-cert\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.733246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b8d8b9-e6f7-404f-8541-9e640332a995-config\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.733323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6gzd\" (UniqueName: \"kubernetes.io/projected/e3b8d8b9-e6f7-404f-8541-9e640332a995-kube-api-access-n6gzd\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.734391 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e3b8d8b9-e6f7-404f-8541-9e640332a995-client-ca\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.735194 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b8d8b9-e6f7-404f-8541-9e640332a995-config\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.742696 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3b8d8b9-e6f7-404f-8541-9e640332a995-serving-cert\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.758097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6gzd\" (UniqueName: \"kubernetes.io/projected/e3b8d8b9-e6f7-404f-8541-9e640332a995-kube-api-access-n6gzd\") pod \"route-controller-manager-54975f5f46-wrfzl\" (UID: \"e3b8d8b9-e6f7-404f-8541-9e640332a995\") " pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:31 crc kubenswrapper[4832]: I1002 18:33:31.886495 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:32 crc kubenswrapper[4832]: I1002 18:33:32.343306 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl"] Oct 02 18:33:32 crc kubenswrapper[4832]: I1002 18:33:32.445218 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" event={"ID":"e3b8d8b9-e6f7-404f-8541-9e640332a995","Type":"ContainerStarted","Data":"672a4ac7a1cd5fadef592d2fdbe6a35e71c305085bc3cfd6046ebd40df0b762c"} Oct 02 18:33:33 crc kubenswrapper[4832]: I1002 18:33:33.434703 4832 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Oct 02 18:33:33 crc kubenswrapper[4832]: I1002 18:33:33.434803 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="757a6407-20f2-4b69-816a-6b01c7e5cc79" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 02 18:33:33 crc kubenswrapper[4832]: I1002 18:33:33.458239 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" event={"ID":"e3b8d8b9-e6f7-404f-8541-9e640332a995","Type":"ContainerStarted","Data":"1146b2a7d261b32ec5463c0efceade9f127f23876c85498d6ae1dbe12ca61505"} Oct 02 18:33:33 crc kubenswrapper[4832]: I1002 18:33:33.458672 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:33 crc kubenswrapper[4832]: I1002 18:33:33.466440 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" Oct 02 18:33:33 crc kubenswrapper[4832]: I1002 18:33:33.500999 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54975f5f46-wrfzl" podStartSLOduration=6.500967665 podStartE2EDuration="6.500967665s" podCreationTimestamp="2025-10-02 18:33:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:33:33.486798882 +0000 UTC m=+770.456241794" watchObservedRunningTime="2025-10-02 18:33:33.500967665 +0000 UTC m=+770.470410577" Oct 02 18:33:34 crc kubenswrapper[4832]: I1002 18:33:34.868857 4832 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.756052 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sg88p"] Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.759197 4832 util.go:30] "No sandbox for pod can be found. 
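The loki-ingester readiness failures above recur on the probe period (18:33:23, 18:33:33, ...) and the reason evolves from "owns no tokens" to "waiting for 15s after being ready", which is the ingester joining its ring rather than crashing. Listing failure timestamps per pod distinguishes a converging probe from a flapping one. A sketch under the same kubelet.log assumption (same-day timestamps only; day rollover is ignored):

import re
from collections import defaultdict

fail_re = re.compile(r'(\d{2}):(\d{2}):(\d{2})\.\d+\s+\d+ prober\.go:\d+\] "Probe failed".*?pod="([^"]+)"')

failures = defaultdict(list)
with open("kubelet.log") as f:
    for line in f:
        m = fail_re.search(line)
        if m:
            h, mi, s, pod = m.groups()
            failures[pod].append(int(h) * 3600 + int(mi) * 60 + int(s))

for pod, ts in failures.items():
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    print(pod, "failures:", len(ts), "gaps(s):", gaps)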
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.768220 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg88p"] Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.828191 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-catalog-content\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.828277 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-utilities\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.828310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpw4\" (UniqueName: \"kubernetes.io/projected/e0b51385-1cb8-44a8-8900-24373fae7d60-kube-api-access-7dpw4\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.930227 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-catalog-content\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.930323 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-utilities\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.930360 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpw4\" (UniqueName: \"kubernetes.io/projected/e0b51385-1cb8-44a8-8900-24373fae7d60-kube-api-access-7dpw4\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.930762 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-catalog-content\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.930933 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-utilities\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:41 crc kubenswrapper[4832]: I1002 18:33:41.957386 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7dpw4\" (UniqueName: \"kubernetes.io/projected/e0b51385-1cb8-44a8-8900-24373fae7d60-kube-api-access-7dpw4\") pod \"redhat-marketplace-sg88p\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") " pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:42 crc kubenswrapper[4832]: I1002 18:33:42.096335 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:42 crc kubenswrapper[4832]: I1002 18:33:42.530802 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg88p"] Oct 02 18:33:43 crc kubenswrapper[4832]: I1002 18:33:43.433739 4832 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Oct 02 18:33:43 crc kubenswrapper[4832]: I1002 18:33:43.434534 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="757a6407-20f2-4b69-816a-6b01c7e5cc79" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 02 18:33:43 crc kubenswrapper[4832]: I1002 18:33:43.561691 4832 generic.go:334] "Generic (PLEG): container finished" podID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerID="46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b" exitCode=0 Oct 02 18:33:43 crc kubenswrapper[4832]: I1002 18:33:43.561760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg88p" event={"ID":"e0b51385-1cb8-44a8-8900-24373fae7d60","Type":"ContainerDied","Data":"46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b"} Oct 02 18:33:43 crc kubenswrapper[4832]: I1002 18:33:43.561802 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg88p" event={"ID":"e0b51385-1cb8-44a8-8900-24373fae7d60","Type":"ContainerStarted","Data":"56e7ada1681e3046fac63052ea80482f0f59457e1b024d24aeddd96c5971782c"} Oct 02 18:33:45 crc kubenswrapper[4832]: I1002 18:33:45.580687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg88p" event={"ID":"e0b51385-1cb8-44a8-8900-24373fae7d60","Type":"ContainerStarted","Data":"7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab"} Oct 02 18:33:46 crc kubenswrapper[4832]: I1002 18:33:46.595571 4832 generic.go:334] "Generic (PLEG): container finished" podID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerID="7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab" exitCode=0 Oct 02 18:33:46 crc kubenswrapper[4832]: I1002 18:33:46.595705 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg88p" event={"ID":"e0b51385-1cb8-44a8-8900-24373fae7d60","Type":"ContainerDied","Data":"7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab"} Oct 02 18:33:48 crc kubenswrapper[4832]: I1002 18:33:48.610716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg88p" event={"ID":"e0b51385-1cb8-44a8-8900-24373fae7d60","Type":"ContainerStarted","Data":"30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202"} Oct 02 18:33:48 crc kubenswrapper[4832]: I1002 18:33:48.634009 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-sg88p" podStartSLOduration=3.70797599 podStartE2EDuration="7.633991082s" podCreationTimestamp="2025-10-02 18:33:41 +0000 UTC" firstStartedPulling="2025-10-02 18:33:43.565447955 +0000 UTC m=+780.534890847" lastFinishedPulling="2025-10-02 18:33:47.491463057 +0000 UTC m=+784.460905939" observedRunningTime="2025-10-02 18:33:48.633825387 +0000 UTC m=+785.603268359" watchObservedRunningTime="2025-10-02 18:33:48.633991082 +0000 UTC m=+785.603433954" Oct 02 18:33:52 crc kubenswrapper[4832]: I1002 18:33:52.097343 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:52 crc kubenswrapper[4832]: I1002 18:33:52.097953 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:52 crc kubenswrapper[4832]: I1002 18:33:52.160214 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:52 crc kubenswrapper[4832]: I1002 18:33:52.721306 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sg88p" Oct 02 18:33:52 crc kubenswrapper[4832]: I1002 18:33:52.802025 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg88p"] Oct 02 18:33:53 crc kubenswrapper[4832]: I1002 18:33:53.432346 4832 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Oct 02 18:33:53 crc kubenswrapper[4832]: I1002 18:33:53.433942 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="757a6407-20f2-4b69-816a-6b01c7e5cc79" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 02 18:33:54 crc kubenswrapper[4832]: I1002 18:33:54.660433 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sg88p" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerName="registry-server" containerID="cri-o://30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202" gracePeriod=2 Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.263215 4832 util.go:48] "No ready sandbox for pod can be found. 
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.376844 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-utilities\") pod \"e0b51385-1cb8-44a8-8900-24373fae7d60\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") "
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.376941 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-catalog-content\") pod \"e0b51385-1cb8-44a8-8900-24373fae7d60\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") "
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.377100 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dpw4\" (UniqueName: \"kubernetes.io/projected/e0b51385-1cb8-44a8-8900-24373fae7d60-kube-api-access-7dpw4\") pod \"e0b51385-1cb8-44a8-8900-24373fae7d60\" (UID: \"e0b51385-1cb8-44a8-8900-24373fae7d60\") "
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.379176 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-utilities" (OuterVolumeSpecName: "utilities") pod "e0b51385-1cb8-44a8-8900-24373fae7d60" (UID: "e0b51385-1cb8-44a8-8900-24373fae7d60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.399566 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b51385-1cb8-44a8-8900-24373fae7d60-kube-api-access-7dpw4" (OuterVolumeSpecName: "kube-api-access-7dpw4") pod "e0b51385-1cb8-44a8-8900-24373fae7d60" (UID: "e0b51385-1cb8-44a8-8900-24373fae7d60"). InnerVolumeSpecName "kube-api-access-7dpw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.409472 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0b51385-1cb8-44a8-8900-24373fae7d60" (UID: "e0b51385-1cb8-44a8-8900-24373fae7d60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.479627 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.479673 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dpw4\" (UniqueName: \"kubernetes.io/projected/e0b51385-1cb8-44a8-8900-24373fae7d60-kube-api-access-7dpw4\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.479690 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0b51385-1cb8-44a8-8900-24373fae7d60-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.679817 4832 generic.go:334] "Generic (PLEG): container finished" podID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerID="30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202" exitCode=0
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.679875 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg88p" event={"ID":"e0b51385-1cb8-44a8-8900-24373fae7d60","Type":"ContainerDied","Data":"30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202"}
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.680128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg88p" event={"ID":"e0b51385-1cb8-44a8-8900-24373fae7d60","Type":"ContainerDied","Data":"56e7ada1681e3046fac63052ea80482f0f59457e1b024d24aeddd96c5971782c"}
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.680161 4832 scope.go:117] "RemoveContainer" containerID="30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.679895 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg88p"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.704361 4832 scope.go:117] "RemoveContainer" containerID="7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.746102 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg88p"]
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.747153 4832 scope.go:117] "RemoveContainer" containerID="46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.753606 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg88p"]
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.767735 4832 scope.go:117] "RemoveContainer" containerID="30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202"
Oct 02 18:33:55 crc kubenswrapper[4832]: E1002 18:33:55.768531 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202\": container with ID starting with 30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202 not found: ID does not exist" containerID="30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.768569 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202"} err="failed to get container status \"30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202\": rpc error: code = NotFound desc = could not find container \"30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202\": container with ID starting with 30f53e626d1e7c4e28a8a6d7bac65a3f57fde06385e70d75b822c18769973202 not found: ID does not exist"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.768594 4832 scope.go:117] "RemoveContainer" containerID="7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab"
Oct 02 18:33:55 crc kubenswrapper[4832]: E1002 18:33:55.768998 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab\": container with ID starting with 7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab not found: ID does not exist" containerID="7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.769038 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab"} err="failed to get container status \"7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab\": rpc error: code = NotFound desc = could not find container \"7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab\": container with ID starting with 7b0e9756c2dba776304c2cb0ee0903e77aa72f0edb9e741d4bc9bf0c69de57ab not found: ID does not exist"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.769070 4832 scope.go:117] "RemoveContainer" containerID="46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b"
Oct 02 18:33:55 crc kubenswrapper[4832]: E1002 18:33:55.769518 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b\": container with ID starting with 46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b not found: ID does not exist" containerID="46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b"
Oct 02 18:33:55 crc kubenswrapper[4832]: I1002 18:33:55.769546 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b"} err="failed to get container status \"46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b\": rpc error: code = NotFound desc = could not find container \"46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b\": container with ID starting with 46e4ef5ba429f7735ed7354e44d1d0bfc2dfcb802d57fce054a46bdacd4add3b not found: ID does not exist"
Oct 02 18:33:56 crc kubenswrapper[4832]: I1002 18:33:56.875498 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 18:33:56 crc kubenswrapper[4832]: I1002 18:33:56.875882 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 18:33:57 crc kubenswrapper[4832]: I1002 18:33:57.237503 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" path="/var/lib/kubelet/pods/e0b51385-1cb8-44a8-8900-24373fae7d60/volumes"
Oct 02 18:34:03 crc kubenswrapper[4832]: I1002 18:34:03.429335 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.243479 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-bw5sm"]
Oct 02 18:34:23 crc kubenswrapper[4832]: E1002 18:34:23.244151 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerName="extract-utilities"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.244163 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerName="extract-utilities"
Oct 02 18:34:23 crc kubenswrapper[4832]: E1002 18:34:23.244183 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerName="extract-content"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.244189 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerName="extract-content"
Oct 02 18:34:23 crc kubenswrapper[4832]: E1002 18:34:23.244200 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerName="registry-server"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.244207 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerName="registry-server"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.244338 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b51385-1cb8-44a8-8900-24373fae7d60" containerName="registry-server"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.244870 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.246236 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-bw5sm"]
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.250957 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-z4hn7"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.250978 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.251330 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.251536 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.251798 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.259700 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.371613 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-metrics\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.371724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.371867 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-sa-token\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.371926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-token\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.371973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f1a038ba-6805-46d6-89c5-b9f63c13b83e-datadir\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.372009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config-openshift-service-cacrt\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.372041 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-entrypoint\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.372079 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnsf6\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-kube-api-access-mnsf6\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.372123 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-syslog-receiver\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.372234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-trusted-ca\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.372570 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1a038ba-6805-46d6-89c5-b9f63c13b83e-tmp\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.401884 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-bw5sm"]
Oct 02 18:34:23 crc kubenswrapper[4832]: E1002 18:34:23.402626 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-mnsf6 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-bw5sm" podUID="f1a038ba-6805-46d6-89c5-b9f63c13b83e"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.474388 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-metrics\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.474695 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
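The pod_workers entry above records the kubelet canceling an in-flight sync: the SyncLoop DELETE at 18:34:23.401884 landed while volume setup for collector-bw5sm was still running, so the worker's context was canceled with all eleven volumes still listed as unmounted (the MountVolume entries that follow are the already-queued operations completing anyway). One way to pull the volume list out of such a message (the regex is illustrative):

```python
# Sketch: extract the unmounted-volume list from a pod_workers error message.
import re

err = ('unmounted volumes=[collector-syslog-receiver collector-token config '
       'config-openshift-service-cacrt datadir entrypoint kube-api-access-mnsf6 '
       'metrics sa-token tmp trusted-ca], unattached volumes=[], '
       'failed to process volumes=[]: context canceled')
vols = re.search(r'unmounted volumes=\[([^\]]*)\]', err).group(1).split()
print(len(vols), vols[:3])  # 11 ['collector-syslog-receiver', 'collector-token', 'config']
```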
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.474898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-sa-token\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.475037 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-token\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.475169 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f1a038ba-6805-46d6-89c5-b9f63c13b83e-datadir\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.475309 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config-openshift-service-cacrt\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.475440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-entrypoint\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.475558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnsf6\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-kube-api-access-mnsf6\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.475676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-syslog-receiver\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.475828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-trusted-ca\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.475971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1a038ba-6805-46d6-89c5-b9f63c13b83e-tmp\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.477545 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f1a038ba-6805-46d6-89c5-b9f63c13b83e-datadir\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.478514 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config-openshift-service-cacrt\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.478538 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-trusted-ca\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.479082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.479255 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-entrypoint\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.482957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-syslog-receiver\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.492948 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-metrics\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.498379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1a038ba-6805-46d6-89c5-b9f63c13b83e-tmp\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.501650 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-sa-token\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.501800 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnsf6\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-kube-api-access-mnsf6\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.505068 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-token\") pod \"collector-bw5sm\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") " pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.926993 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:23 crc kubenswrapper[4832]: I1002 18:34:23.943389 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-bw5sm"
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.085829 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config-openshift-service-cacrt\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.085932 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-metrics\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.085979 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-token\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086025 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-trusted-ca\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086049 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-entrypoint\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086114 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnsf6\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-kube-api-access-mnsf6\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086161 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086193 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-sa-token\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086248 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f1a038ba-6805-46d6-89c5-b9f63c13b83e-datadir\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086307 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-syslog-receiver\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086354 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1a038ba-6805-46d6-89c5-b9f63c13b83e-tmp\") pod \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\" (UID: \"f1a038ba-6805-46d6-89c5-b9f63c13b83e\") "
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.086673 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.087197 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1a038ba-6805-46d6-89c5-b9f63c13b83e-datadir" (OuterVolumeSpecName: "datadir") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.087509 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.087593 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config" (OuterVolumeSpecName: "config") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.088183 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.091344 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.092425 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-token" (OuterVolumeSpecName: "collector-token") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.092715 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-metrics" (OuterVolumeSpecName: "metrics") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.093313 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-sa-token" (OuterVolumeSpecName: "sa-token") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.093631 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-kube-api-access-mnsf6" (OuterVolumeSpecName: "kube-api-access-mnsf6") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "kube-api-access-mnsf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.094514 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a038ba-6805-46d6-89c5-b9f63c13b83e-tmp" (OuterVolumeSpecName: "tmp") pod "f1a038ba-6805-46d6-89c5-b9f63c13b83e" (UID: "f1a038ba-6805-46d6-89c5-b9f63c13b83e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.188943 4832 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-sa-token\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.188989 4832 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f1a038ba-6805-46d6-89c5-b9f63c13b83e-datadir\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189002 4832 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189017 4832 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f1a038ba-6805-46d6-89c5-b9f63c13b83e-tmp\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189026 4832 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189035 4832 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-metrics\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189044 4832 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f1a038ba-6805-46d6-89c5-b9f63c13b83e-collector-token\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189053 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-trusted-ca\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189061 4832 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-entrypoint\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189071 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnsf6\" (UniqueName: \"kubernetes.io/projected/f1a038ba-6805-46d6-89c5-b9f63c13b83e-kube-api-access-mnsf6\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.189081 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a038ba-6805-46d6-89c5-b9f63c13b83e-config\") on node \"crc\" DevicePath \"\""
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.935227 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-bw5sm"
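collector-bw5sm never got past volume setup: every volume mounted at 18:34:23.47-.50 was unmounted again by 18:34:24.09 with no container event in between, and the REMOVE just below at 18:34:24.995177 retires the pod entirely. Its kubelet-visible lifetime, measured from the SyncLoop ADD at 18:34:23.243479:

```python
# Sketch: lifetime of collector-bw5sm as seen by the kubelet (ADD -> REMOVE).
from datetime import datetime

fmt = "%H:%M:%S.%f"
add = datetime.strptime("18:34:23.243479", fmt)   # "SyncLoop ADD"
rem = datetime.strptime("18:34:24.995177", fmt)   # "SyncLoop REMOVE"
print((rem - add).total_seconds())                # 1.751698 s, gone before any container ran
```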
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.989947 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-bw5sm"]
Oct 02 18:34:24 crc kubenswrapper[4832]: I1002 18:34:24.995177 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-bw5sm"]
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.010770 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-jtbs5"]
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.011705 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.014921 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.015152 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.015373 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.015418 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.015506 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-z4hn7"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.030597 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.041328 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-jtbs5"]
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.103768 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-collector-syslog-receiver\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.103806 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-config\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.103833 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-config-openshift-service-cacrt\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.103850 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-collector-token\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.103869 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqrtc\" (UniqueName: \"kubernetes.io/projected/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-kube-api-access-tqrtc\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.103885 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-trusted-ca\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.103973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-tmp\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.103993 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-metrics\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.104114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-datadir\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.104209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-sa-token\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.104353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-entrypoint\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.205729 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-entrypoint\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.205850 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-collector-syslog-receiver\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.205890 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-config\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.205935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-config-openshift-service-cacrt\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.205969 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-collector-token\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.206013 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqrtc\" (UniqueName: \"kubernetes.io/projected/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-kube-api-access-tqrtc\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.206052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-trusted-ca\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.206102 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-tmp\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.206141 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-metrics\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.206204 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-datadir\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.206252 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-sa-token\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.206683 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-entrypoint\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.206787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-datadir\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.207601 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-config\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.207767 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-config-openshift-service-cacrt\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.208101 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-trusted-ca\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.211102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-tmp\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.213552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-metrics\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.213884 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-collector-token\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.214905 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-collector-syslog-receiver\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.230002 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-sa-token\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.230828 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqrtc\" (UniqueName: \"kubernetes.io/projected/07997706-bdc1-4e87-a7cb-9f5e4b85ea9c-kube-api-access-tqrtc\") pod \"collector-jtbs5\" (UID: \"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c\") " pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.248812 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a038ba-6805-46d6-89c5-b9f63c13b83e" path="/var/lib/kubelet/pods/f1a038ba-6805-46d6-89c5-b9f63c13b83e/volumes"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.343216 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-jtbs5"
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.895717 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-jtbs5"]
Oct 02 18:34:25 crc kubenswrapper[4832]: I1002 18:34:25.944196 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-jtbs5" event={"ID":"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c","Type":"ContainerStarted","Data":"fd92770dfe3cefb51150451f81bc2554a19dab2330fecddb49eae451d0aca5b5"}
Oct 02 18:34:26 crc kubenswrapper[4832]: I1002 18:34:26.876101 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 18:34:26 crc kubenswrapper[4832]: I1002 18:34:26.876336 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.735405 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xdnvl"]
Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.738127 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xdnvl"
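The machine-config-daemon liveness failure above repeats the 18:33:56 entry word for word, thirty seconds later, which points at a fixed probe period against a health endpoint that is not listening (connection refused on 127.0.0.1:8798). Confirming the cadence from the two timestamps:

```python
# Sketch: spacing between the two identical machine-config-daemon liveness failures.
from datetime import datetime

t1 = datetime.strptime("18:33:56.875498", "%H:%M:%S.%f")
t2 = datetime.strptime("18:34:26.876101", "%H:%M:%S.%f")
print((t2 - t1).total_seconds())  # 30.000603, one probe period apart
```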
Need to start a new one" pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.745028 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdnvl"] Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.797575 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-catalog-content\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.797616 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85hh\" (UniqueName: \"kubernetes.io/projected/623543b0-0cef-4136-9b1f-7881329b4c58-kube-api-access-t85hh\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.797702 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-utilities\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.899431 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-utilities\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.899649 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-catalog-content\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.899689 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t85hh\" (UniqueName: \"kubernetes.io/projected/623543b0-0cef-4136-9b1f-7881329b4c58-kube-api-access-t85hh\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.900365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-catalog-content\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.902360 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-utilities\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:29 crc kubenswrapper[4832]: I1002 18:34:29.917958 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t85hh\" (UniqueName: \"kubernetes.io/projected/623543b0-0cef-4136-9b1f-7881329b4c58-kube-api-access-t85hh\") pod \"community-operators-xdnvl\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:30 crc kubenswrapper[4832]: I1002 18:34:30.064743 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:34 crc kubenswrapper[4832]: I1002 18:34:34.230816 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdnvl"] Oct 02 18:34:35 crc kubenswrapper[4832]: I1002 18:34:35.018520 4832 generic.go:334] "Generic (PLEG): container finished" podID="623543b0-0cef-4136-9b1f-7881329b4c58" containerID="0c60d029c80ea515c6b3fbe8091574fcfb84dc1cc2634d34e0b7d0001fce639c" exitCode=0 Oct 02 18:34:35 crc kubenswrapper[4832]: I1002 18:34:35.018616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdnvl" event={"ID":"623543b0-0cef-4136-9b1f-7881329b4c58","Type":"ContainerDied","Data":"0c60d029c80ea515c6b3fbe8091574fcfb84dc1cc2634d34e0b7d0001fce639c"} Oct 02 18:34:35 crc kubenswrapper[4832]: I1002 18:34:35.019177 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdnvl" event={"ID":"623543b0-0cef-4136-9b1f-7881329b4c58","Type":"ContainerStarted","Data":"7a115b4d366788db9179876bc1c54d02b28e005f56439d522089371e7ee0536f"} Oct 02 18:34:35 crc kubenswrapper[4832]: I1002 18:34:35.024323 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-jtbs5" event={"ID":"07997706-bdc1-4e87-a7cb-9f5e4b85ea9c","Type":"ContainerStarted","Data":"5f2fc4e11f06aa593456853fa63157f834ee95f0886a5e397b26abe0f5cb5299"} Oct 02 18:34:35 crc kubenswrapper[4832]: I1002 18:34:35.088911 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-jtbs5" podStartSLOduration=2.810624683 podStartE2EDuration="11.088891836s" podCreationTimestamp="2025-10-02 18:34:24 +0000 UTC" firstStartedPulling="2025-10-02 18:34:25.906723186 +0000 UTC m=+822.876166058" lastFinishedPulling="2025-10-02 18:34:34.184990339 +0000 UTC m=+831.154433211" observedRunningTime="2025-10-02 18:34:35.071213393 +0000 UTC m=+832.040656305" watchObservedRunningTime="2025-10-02 18:34:35.088891836 +0000 UTC m=+832.058334708" Oct 02 18:34:37 crc kubenswrapper[4832]: I1002 18:34:37.044643 4832 generic.go:334] "Generic (PLEG): container finished" podID="623543b0-0cef-4136-9b1f-7881329b4c58" containerID="66cabeaf3c4a425df26ab26a42a0732c139e51265a5f4495d7a35d84e6527920" exitCode=0 Oct 02 18:34:37 crc kubenswrapper[4832]: I1002 18:34:37.044732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdnvl" event={"ID":"623543b0-0cef-4136-9b1f-7881329b4c58","Type":"ContainerDied","Data":"66cabeaf3c4a425df26ab26a42a0732c139e51265a5f4495d7a35d84e6527920"} Oct 02 18:34:38 crc kubenswrapper[4832]: I1002 18:34:38.054809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdnvl" event={"ID":"623543b0-0cef-4136-9b1f-7881329b4c58","Type":"ContainerStarted","Data":"c102b9eed4eb0371c7f99e922211e7b4c5b1988627019f2b919d30c4b9b92d90"} Oct 02 18:34:38 crc kubenswrapper[4832]: I1002 18:34:38.075096 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-xdnvl" podStartSLOduration=6.444591512 podStartE2EDuration="9.075070977s" podCreationTimestamp="2025-10-02 18:34:29 +0000 UTC" firstStartedPulling="2025-10-02 18:34:35.025424002 +0000 UTC m=+831.994866874" lastFinishedPulling="2025-10-02 18:34:37.655903467 +0000 UTC m=+834.625346339" observedRunningTime="2025-10-02 18:34:38.073288031 +0000 UTC m=+835.042730903" watchObservedRunningTime="2025-10-02 18:34:38.075070977 +0000 UTC m=+835.044513859" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.005956 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4q7ql"] Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.007308 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.032153 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4q7ql"] Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.065989 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.066062 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.072579 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-catalog-content\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.072635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5sh\" (UniqueName: \"kubernetes.io/projected/5316025d-e034-4277-9f14-2184ca0fe9a6-kube-api-access-zs5sh\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.072748 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-utilities\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.174482 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-catalog-content\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.174525 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs5sh\" (UniqueName: \"kubernetes.io/projected/5316025d-e034-4277-9f14-2184ca0fe9a6-kube-api-access-zs5sh\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql"
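The reconciler records above trace the standard volume flow for a catalog pod: operationExecutor.VerifyControllerAttachedVolume, then MountVolume.SetUp for two emptyDir volumes (catalog-content, utilities) plus the projected service-account token (kube-api-access-zs5sh). A minimal sketch of a pod with that volume shape; only the volume names and types come from the log, while the image and mount paths are placeholders.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // catalogPod sketches the volume shape seen above. The kubelet builds the
    // projected kube-api-access-* token volume automatically for the service
    // account, so only the two emptyDirs are declared here.
    func catalogPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "redhat-operators-4q7ql",
                Namespace: "openshift-marketplace",
            },
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{
                    {Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
                    {Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
                },
                Containers: []corev1.Container{{
                    Name:  "registry-server",
                    Image: "registry.example/catalog:placeholder", // placeholder, not the real image
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "catalog-content", MountPath: "/extracted-catalog"}, // mount path assumed
                        {Name: "utilities", MountPath: "/utilities"},               // mount path assumed
                    },
                }},
            },
        }
    }

    func main() { _ = catalogPod() }
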
Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.174675 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-utilities\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.174998 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-catalog-content\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.175193 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-utilities\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.208238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs5sh\" (UniqueName: \"kubernetes.io/projected/5316025d-e034-4277-9f14-2184ca0fe9a6-kube-api-access-zs5sh\") pod \"redhat-operators-4q7ql\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.330192 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:40 crc kubenswrapper[4832]: I1002 18:34:40.748874 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4q7ql"] Oct 02 18:34:40 crc kubenswrapper[4832]: W1002 18:34:40.755831 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5316025d_e034_4277_9f14_2184ca0fe9a6.slice/crio-193ff3f10a44f4ea4500aa9ecf3c65a0d4af6ea3c77c070eb4e08af26e1c1cc9 WatchSource:0}: Error finding container 193ff3f10a44f4ea4500aa9ecf3c65a0d4af6ea3c77c070eb4e08af26e1c1cc9: Status 404 returned error can't find the container with id 193ff3f10a44f4ea4500aa9ecf3c65a0d4af6ea3c77c070eb4e08af26e1c1cc9 Oct 02 18:34:41 crc kubenswrapper[4832]: I1002 18:34:41.076942 4832 generic.go:334] "Generic (PLEG): container finished" podID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerID="dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3" exitCode=0 Oct 02 18:34:41 crc kubenswrapper[4832]: I1002 18:34:41.076986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4q7ql" event={"ID":"5316025d-e034-4277-9f14-2184ca0fe9a6","Type":"ContainerDied","Data":"dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3"} Oct 02 18:34:41 crc kubenswrapper[4832]: I1002 18:34:41.077014 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4q7ql" event={"ID":"5316025d-e034-4277-9f14-2184ca0fe9a6","Type":"ContainerStarted","Data":"193ff3f10a44f4ea4500aa9ecf3c65a0d4af6ea3c77c070eb4e08af26e1c1cc9"} Oct 02 18:34:41 crc kubenswrapper[4832]: I1002 18:34:41.127612 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xdnvl" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="registry-server" probeResult="failure" output=< Oct 02 18:34:41 crc kubenswrapper[4832]: timeout: failed to connect service 
":50051" within 1s Oct 02 18:34:41 crc kubenswrapper[4832]: > Oct 02 18:34:42 crc kubenswrapper[4832]: I1002 18:34:42.792438 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xw58r"] Oct 02 18:34:42 crc kubenswrapper[4832]: I1002 18:34:42.794011 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:42 crc kubenswrapper[4832]: I1002 18:34:42.807255 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xw58r"] Oct 02 18:34:42 crc kubenswrapper[4832]: I1002 18:34:42.919582 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggj69\" (UniqueName: \"kubernetes.io/projected/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-kube-api-access-ggj69\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:42 crc kubenswrapper[4832]: I1002 18:34:42.919911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-utilities\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:42 crc kubenswrapper[4832]: I1002 18:34:42.920011 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-catalog-content\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.022109 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggj69\" (UniqueName: \"kubernetes.io/projected/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-kube-api-access-ggj69\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.022155 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-utilities\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.022208 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-catalog-content\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.022743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-utilities\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.022823 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-catalog-content\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.040424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggj69\" (UniqueName: \"kubernetes.io/projected/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-kube-api-access-ggj69\") pod \"certified-operators-xw58r\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.093689 4832 generic.go:334] "Generic (PLEG): container finished" podID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerID="854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1" exitCode=0 Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.093737 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4q7ql" event={"ID":"5316025d-e034-4277-9f14-2184ca0fe9a6","Type":"ContainerDied","Data":"854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1"} Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.106868 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:43 crc kubenswrapper[4832]: I1002 18:34:43.557835 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xw58r"] Oct 02 18:34:43 crc kubenswrapper[4832]: W1002 18:34:43.566320 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd15bc8b_af7b_40a8_b6d2_ae84051c9c19.slice/crio-d8ba9c43c331177c3945e0a698438b20bfe8bdb213fc9fd24249f72983a8ddcd WatchSource:0}: Error finding container d8ba9c43c331177c3945e0a698438b20bfe8bdb213fc9fd24249f72983a8ddcd: Status 404 returned error can't find the container with id d8ba9c43c331177c3945e0a698438b20bfe8bdb213fc9fd24249f72983a8ddcd Oct 02 18:34:44 crc kubenswrapper[4832]: I1002 18:34:44.109902 4832 generic.go:334] "Generic (PLEG): container finished" podID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerID="3ec00c42b65ed7aba6b3998f4ee1a4277a10d23f10654790d9bd52f231370507" exitCode=0 Oct 02 18:34:44 crc kubenswrapper[4832]: I1002 18:34:44.110030 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw58r" event={"ID":"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19","Type":"ContainerDied","Data":"3ec00c42b65ed7aba6b3998f4ee1a4277a10d23f10654790d9bd52f231370507"} Oct 02 18:34:44 crc kubenswrapper[4832]: I1002 18:34:44.110092 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw58r" event={"ID":"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19","Type":"ContainerStarted","Data":"d8ba9c43c331177c3945e0a698438b20bfe8bdb213fc9fd24249f72983a8ddcd"} Oct 02 18:34:44 crc kubenswrapper[4832]: I1002 18:34:44.117367 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4q7ql" event={"ID":"5316025d-e034-4277-9f14-2184ca0fe9a6","Type":"ContainerStarted","Data":"0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b"} Oct 02 18:34:44 crc kubenswrapper[4832]: I1002 18:34:44.154205 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4q7ql" 
podStartSLOduration=2.734667342 podStartE2EDuration="5.154172994s" podCreationTimestamp="2025-10-02 18:34:39 +0000 UTC" firstStartedPulling="2025-10-02 18:34:41.08054293 +0000 UTC m=+838.049985812" lastFinishedPulling="2025-10-02 18:34:43.500048592 +0000 UTC m=+840.469491464" observedRunningTime="2025-10-02 18:34:44.153150522 +0000 UTC m=+841.122593424" watchObservedRunningTime="2025-10-02 18:34:44.154172994 +0000 UTC m=+841.123615906" Oct 02 18:34:46 crc kubenswrapper[4832]: I1002 18:34:46.137423 4832 generic.go:334] "Generic (PLEG): container finished" podID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerID="eb229f22c383b596b173caf6397f3cd6fc89d40b6d38d20d080c89128cd515fa" exitCode=0 Oct 02 18:34:46 crc kubenswrapper[4832]: I1002 18:34:46.137521 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw58r" event={"ID":"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19","Type":"ContainerDied","Data":"eb229f22c383b596b173caf6397f3cd6fc89d40b6d38d20d080c89128cd515fa"} Oct 02 18:34:47 crc kubenswrapper[4832]: I1002 18:34:47.148337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw58r" event={"ID":"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19","Type":"ContainerStarted","Data":"9d0fd22fb7b75c444a7fadfde8841f3c06aa019953be3a49b7b2941a2c5ca107"} Oct 02 18:34:47 crc kubenswrapper[4832]: I1002 18:34:47.168515 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xw58r" podStartSLOduration=2.437596049 podStartE2EDuration="5.168493839s" podCreationTimestamp="2025-10-02 18:34:42 +0000 UTC" firstStartedPulling="2025-10-02 18:34:44.113446941 +0000 UTC m=+841.082889833" lastFinishedPulling="2025-10-02 18:34:46.844344751 +0000 UTC m=+843.813787623" observedRunningTime="2025-10-02 18:34:47.162947014 +0000 UTC m=+844.132389886" watchObservedRunningTime="2025-10-02 18:34:47.168493839 +0000 UTC m=+844.137936721" Oct 02 18:34:50 crc kubenswrapper[4832]: I1002 18:34:50.148553 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:50 crc kubenswrapper[4832]: I1002 18:34:50.222213 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:50 crc kubenswrapper[4832]: I1002 18:34:50.330358 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:50 crc kubenswrapper[4832]: I1002 18:34:50.330434 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:50 crc kubenswrapper[4832]: I1002 18:34:50.371994 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:50 crc kubenswrapper[4832]: I1002 18:34:50.577628 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdnvl"] Oct 02 18:34:51 crc kubenswrapper[4832]: I1002 18:34:51.188769 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xdnvl" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="registry-server" containerID="cri-o://c102b9eed4eb0371c7f99e922211e7b4c5b1988627019f2b919d30c4b9b92d90" gracePeriod=2
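The "Observed pod startup duration" records above decompose as podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, with podStartSLOduration excluding the image-pull window (lastFinishedPulling - firstStartedPulling). A worked check of the redhat-operators-4q7ql numbers; the pull window is taken from the record's m=+ monotonic-clock offsets, which is why plain wall-clock subtraction can disagree in the final digits.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-10-02 18:34:39 +0000 UTC")
        running, _ := time.Parse(layout, "2025-10-02 18:34:44.154172994 +0000 UTC")

        e2e := running.Sub(created) // 5.154172994s == podStartE2EDuration

        // Pull window from the monotonic offsets in the record:
        // m=+840.469491464 (lastFinishedPulling) - m=+838.049985812 (firstStartedPulling).
        pull := time.Duration(840469491464 - 838049985812) // nanoseconds

        slo := e2e - pull // 2.734667342s == podStartSLOduration
        fmt.Println(e2e, pull, slo)
    }
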
Oct 02 18:34:51 crc kubenswrapper[4832]: I1002 18:34:51.261032 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:51 crc kubenswrapper[4832]: E1002 18:34:51.908630 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod623543b0_0cef_4136_9b1f_7881329b4c58.slice/crio-conmon-c102b9eed4eb0371c7f99e922211e7b4c5b1988627019f2b919d30c4b9b92d90.scope\": RecentStats: unable to find data in memory cache]" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.198412 4832 generic.go:334] "Generic (PLEG): container finished" podID="623543b0-0cef-4136-9b1f-7881329b4c58" containerID="c102b9eed4eb0371c7f99e922211e7b4c5b1988627019f2b919d30c4b9b92d90" exitCode=0 Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.198473 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdnvl" event={"ID":"623543b0-0cef-4136-9b1f-7881329b4c58","Type":"ContainerDied","Data":"c102b9eed4eb0371c7f99e922211e7b4c5b1988627019f2b919d30c4b9b92d90"} Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.198534 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdnvl" event={"ID":"623543b0-0cef-4136-9b1f-7881329b4c58","Type":"ContainerDied","Data":"7a115b4d366788db9179876bc1c54d02b28e005f56439d522089371e7ee0536f"} Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.198554 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a115b4d366788db9179876bc1c54d02b28e005f56439d522089371e7ee0536f" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.233705 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.292927 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t85hh\" (UniqueName: \"kubernetes.io/projected/623543b0-0cef-4136-9b1f-7881329b4c58-kube-api-access-t85hh\") pod \"623543b0-0cef-4136-9b1f-7881329b4c58\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.293027 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-utilities\") pod \"623543b0-0cef-4136-9b1f-7881329b4c58\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.293189 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-catalog-content\") pod \"623543b0-0cef-4136-9b1f-7881329b4c58\" (UID: \"623543b0-0cef-4136-9b1f-7881329b4c58\") " Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.294481 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-utilities" (OuterVolumeSpecName: "utilities") pod "623543b0-0cef-4136-9b1f-7881329b4c58" (UID: "623543b0-0cef-4136-9b1f-7881329b4c58"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.305774 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/623543b0-0cef-4136-9b1f-7881329b4c58-kube-api-access-t85hh" (OuterVolumeSpecName: "kube-api-access-t85hh") pod "623543b0-0cef-4136-9b1f-7881329b4c58" (UID: "623543b0-0cef-4136-9b1f-7881329b4c58"). InnerVolumeSpecName "kube-api-access-t85hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.356914 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "623543b0-0cef-4136-9b1f-7881329b4c58" (UID: "623543b0-0cef-4136-9b1f-7881329b4c58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.395886 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.395944 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t85hh\" (UniqueName: \"kubernetes.io/projected/623543b0-0cef-4136-9b1f-7881329b4c58-kube-api-access-t85hh\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.395968 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/623543b0-0cef-4136-9b1f-7881329b4c58-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:52 crc kubenswrapper[4832]: I1002 18:34:52.780699 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4q7ql"] Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.108424 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.108807 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.191540 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.213456 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdnvl" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.222412 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4q7ql" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerName="registry-server" containerID="cri-o://0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b" gracePeriod=2 Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.271521 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdnvl"] Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.277054 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xdnvl"] Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.278970 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.706779 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.824965 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-catalog-content\") pod \"5316025d-e034-4277-9f14-2184ca0fe9a6\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.825047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-utilities\") pod \"5316025d-e034-4277-9f14-2184ca0fe9a6\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.825136 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs5sh\" (UniqueName: \"kubernetes.io/projected/5316025d-e034-4277-9f14-2184ca0fe9a6-kube-api-access-zs5sh\") pod \"5316025d-e034-4277-9f14-2184ca0fe9a6\" (UID: \"5316025d-e034-4277-9f14-2184ca0fe9a6\") " Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.825978 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-utilities" (OuterVolumeSpecName: "utilities") pod "5316025d-e034-4277-9f14-2184ca0fe9a6" (UID: "5316025d-e034-4277-9f14-2184ca0fe9a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.832734 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5316025d-e034-4277-9f14-2184ca0fe9a6-kube-api-access-zs5sh" (OuterVolumeSpecName: "kube-api-access-zs5sh") pod "5316025d-e034-4277-9f14-2184ca0fe9a6" (UID: "5316025d-e034-4277-9f14-2184ca0fe9a6"). InnerVolumeSpecName "kube-api-access-zs5sh". PluginName "kubernetes.io/projected", VolumeGidValue ""
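The SyncLoop DELETE, "Killing container with a grace period", UnmountVolume.TearDown and "Volume detached" sequence around these records is the kubelet reacting to API-side deletion of the catalog pods. A sketch of the API side using client-go; the log does not show whether gracePeriod=2 came from the delete call or from the pod spec's terminationGracePeriodSeconds, so the explicit override below, like the kubeconfig path, is an assumption.

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Out-of-cluster config; the path is a placeholder.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        grace := int64(2) // mirrors gracePeriod=2 in the kubelet records above
        if err := client.CoreV1().Pods("openshift-marketplace").Delete(
            context.TODO(), "redhat-operators-4q7ql",
            metav1.DeleteOptions{GracePeriodSeconds: &grace},
        ); err != nil {
            panic(err)
        }
    }
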
Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.900224 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5316025d-e034-4277-9f14-2184ca0fe9a6" (UID: "5316025d-e034-4277-9f14-2184ca0fe9a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.927301 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs5sh\" (UniqueName: \"kubernetes.io/projected/5316025d-e034-4277-9f14-2184ca0fe9a6-kube-api-access-zs5sh\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.927340 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:53 crc kubenswrapper[4832]: I1002 18:34:53.927353 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5316025d-e034-4277-9f14-2184ca0fe9a6-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.223555 4832 generic.go:334] "Generic (PLEG): container finished" podID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerID="0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b" exitCode=0 Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.224411 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4q7ql" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.224518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4q7ql" event={"ID":"5316025d-e034-4277-9f14-2184ca0fe9a6","Type":"ContainerDied","Data":"0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b"} Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.224543 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4q7ql" event={"ID":"5316025d-e034-4277-9f14-2184ca0fe9a6","Type":"ContainerDied","Data":"193ff3f10a44f4ea4500aa9ecf3c65a0d4af6ea3c77c070eb4e08af26e1c1cc9"} Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.224561 4832 scope.go:117] "RemoveContainer" containerID="0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.257852 4832 scope.go:117] "RemoveContainer" containerID="854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.281589 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4q7ql"] Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.286800 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4q7ql"] Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.299814 4832 scope.go:117] "RemoveContainer" containerID="dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.331753 4832 scope.go:117] "RemoveContainer" containerID="0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b" Oct 02 18:34:54 crc kubenswrapper[4832]: E1002 18:34:54.332475 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b\": container with ID starting with 0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b not found: ID does not exist" containerID="0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.332541 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b"} err="failed to get container status \"0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b\": rpc error: code = NotFound desc = could not find container \"0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b\": container with ID starting with 0be4c5bef5bcf1277cc9d3d3a8c7a6d6339fcff3c09b6e1fd5966ad1c260785b not found: ID does not exist" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.332576 4832 scope.go:117] "RemoveContainer" containerID="854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1" Oct 02 18:34:54 crc kubenswrapper[4832]: E1002 18:34:54.334380 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1\": container with ID starting with 854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1 not found: ID does not exist" containerID="854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.334461 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1"} err="failed to get container status \"854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1\": rpc error: code = NotFound desc = could not find container \"854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1\": container with ID starting with 854f5059b820755dff4e9add431fc76cb040a0ef4e3336d0595482b81cfcc2a1 not found: ID does not exist" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.334514 4832 scope.go:117] "RemoveContainer" containerID="dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3" Oct 02 18:34:54 crc kubenswrapper[4832]: E1002 18:34:54.335071 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3\": container with ID starting with dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3 not found: ID does not exist" containerID="dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3" Oct 02 18:34:54 crc kubenswrapper[4832]: I1002 18:34:54.335102 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3"} err="failed to get container status \"dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3\": rpc error: code = NotFound desc = could not find container \"dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3\": container with ID starting with dd16bd5651f8c361d555f175833fce5380ffe86d07c8d9b837a488683c5e9bd3 not found: ID does not exist" Oct 02 18:34:55 crc kubenswrapper[4832]: I1002 18:34:55.239133 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" path="/var/lib/kubelet/pods/5316025d-e034-4277-9f14-2184ca0fe9a6/volumes" Oct 02 18:34:55 crc kubenswrapper[4832]: I1002 18:34:55.240516 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" path="/var/lib/kubelet/pods/623543b0-0cef-4136-9b1f-7881329b4c58/volumes" Oct 02 18:34:56 crc kubenswrapper[4832]: I1002 
18:34:56.876646 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:34:56 crc kubenswrapper[4832]: I1002 18:34:56.877624 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:34:56 crc kubenswrapper[4832]: I1002 18:34:56.877700 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:34:56 crc kubenswrapper[4832]: I1002 18:34:56.878711 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39c61194f4de266798fee5bed294464e772af0f2983e7b44f1e151219ed48151"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 18:34:56 crc kubenswrapper[4832]: I1002 18:34:56.878818 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://39c61194f4de266798fee5bed294464e772af0f2983e7b44f1e151219ed48151" gracePeriod=600 Oct 02 18:34:56 crc kubenswrapper[4832]: I1002 18:34:56.994854 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xw58r"] Oct 02 18:34:56 crc kubenswrapper[4832]: I1002 18:34:56.995371 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xw58r" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerName="registry-server" containerID="cri-o://9d0fd22fb7b75c444a7fadfde8841f3c06aa019953be3a49b7b2941a2c5ca107" gracePeriod=2 Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.270175 4832 generic.go:334] "Generic (PLEG): container finished" podID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerID="9d0fd22fb7b75c444a7fadfde8841f3c06aa019953be3a49b7b2941a2c5ca107" exitCode=0 Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.270235 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw58r" event={"ID":"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19","Type":"ContainerDied","Data":"9d0fd22fb7b75c444a7fadfde8841f3c06aa019953be3a49b7b2941a2c5ca107"} Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.275748 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="39c61194f4de266798fee5bed294464e772af0f2983e7b44f1e151219ed48151" exitCode=0 Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.275796 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"39c61194f4de266798fee5bed294464e772af0f2983e7b44f1e151219ed48151"} Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.275824 4832 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"65d3e2d93b30b70c1447cb55a9f5b7b0ff104d9e0a2d6e88b49ea2c6960bc4e2"} Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.275840 4832 scope.go:117] "RemoveContainer" containerID="20aef20185e6aef6323d2c1f8a7e5979029b54a32ae8f3401b046519bdae37e1" Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.361969 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.490686 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-utilities\") pod \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.490840 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggj69\" (UniqueName: \"kubernetes.io/projected/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-kube-api-access-ggj69\") pod \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.490890 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-catalog-content\") pod \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\" (UID: \"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19\") " Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.491919 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-utilities" (OuterVolumeSpecName: "utilities") pod "fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" (UID: "fd15bc8b-af7b-40a8-b6d2-ae84051c9c19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.498510 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-kube-api-access-ggj69" (OuterVolumeSpecName: "kube-api-access-ggj69") pod "fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" (UID: "fd15bc8b-af7b-40a8-b6d2-ae84051c9c19"). InnerVolumeSpecName "kube-api-access-ggj69". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.531822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" (UID: "fd15bc8b-af7b-40a8-b6d2-ae84051c9c19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.592866 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggj69\" (UniqueName: \"kubernetes.io/projected/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-kube-api-access-ggj69\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.592924 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:57 crc kubenswrapper[4832]: I1002 18:34:57.592941 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:34:58 crc kubenswrapper[4832]: I1002 18:34:58.290710 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xw58r" event={"ID":"fd15bc8b-af7b-40a8-b6d2-ae84051c9c19","Type":"ContainerDied","Data":"d8ba9c43c331177c3945e0a698438b20bfe8bdb213fc9fd24249f72983a8ddcd"} Oct 02 18:34:58 crc kubenswrapper[4832]: I1002 18:34:58.291044 4832 scope.go:117] "RemoveContainer" containerID="9d0fd22fb7b75c444a7fadfde8841f3c06aa019953be3a49b7b2941a2c5ca107" Oct 02 18:34:58 crc kubenswrapper[4832]: I1002 18:34:58.290754 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xw58r" Oct 02 18:34:58 crc kubenswrapper[4832]: I1002 18:34:58.323825 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xw58r"] Oct 02 18:34:58 crc kubenswrapper[4832]: I1002 18:34:58.325207 4832 scope.go:117] "RemoveContainer" containerID="eb229f22c383b596b173caf6397f3cd6fc89d40b6d38d20d080c89128cd515fa" Oct 02 18:34:58 crc kubenswrapper[4832]: I1002 18:34:58.328393 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xw58r"] Oct 02 18:34:58 crc kubenswrapper[4832]: I1002 18:34:58.344664 4832 scope.go:117] "RemoveContainer" containerID="3ec00c42b65ed7aba6b3998f4ee1a4277a10d23f10654790d9bd52f231370507" Oct 02 18:34:59 crc kubenswrapper[4832]: I1002 18:34:59.233911 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" path="/var/lib/kubelet/pods/fd15bc8b-af7b-40a8-b6d2-ae84051c9c19/volumes" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.847936 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh"] Oct 02 18:35:02 crc kubenswrapper[4832]: E1002 18:35:02.848819 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerName="extract-utilities" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848836 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerName="extract-utilities" Oct 02 18:35:02 crc kubenswrapper[4832]: E1002 18:35:02.848853 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerName="registry-server" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848861 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerName="registry-server" Oct 02 18:35:02 crc 
kubenswrapper[4832]: E1002 18:35:02.848870 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerName="extract-content" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848877 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerName="extract-content" Oct 02 18:35:02 crc kubenswrapper[4832]: E1002 18:35:02.848889 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerName="extract-content" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848897 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerName="extract-content" Oct 02 18:35:02 crc kubenswrapper[4832]: E1002 18:35:02.848910 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerName="extract-utilities" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848917 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerName="extract-utilities" Oct 02 18:35:02 crc kubenswrapper[4832]: E1002 18:35:02.848929 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="extract-utilities" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848935 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="extract-utilities" Oct 02 18:35:02 crc kubenswrapper[4832]: E1002 18:35:02.848952 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerName="registry-server" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848958 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerName="registry-server" Oct 02 18:35:02 crc kubenswrapper[4832]: E1002 18:35:02.848968 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="registry-server" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848976 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="registry-server" Oct 02 18:35:02 crc kubenswrapper[4832]: E1002 18:35:02.848987 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="extract-content" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.848994 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="extract-content" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.849153 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd15bc8b-af7b-40a8-b6d2-ae84051c9c19" containerName="registry-server" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.849174 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5316025d-e034-4277-9f14-2184ca0fe9a6" containerName="registry-server" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.849190 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="623543b0-0cef-4136-9b1f-7881329b4c58" containerName="registry-server" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.851198 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.859564 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 02 18:35:02 crc kubenswrapper[4832]: I1002 18:35:02.864138 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh"] Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.014509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-645q4\" (UniqueName: \"kubernetes.io/projected/57268deb-95d5-4987-ab26-52f11e9182b4-kube-api-access-645q4\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.014766 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.014825 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.116533 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-645q4\" (UniqueName: \"kubernetes.io/projected/57268deb-95d5-4987-ab26-52f11e9182b4-kube-api-access-645q4\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.116614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.116692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.117147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.117695 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.134733 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-645q4\" (UniqueName: \"kubernetes.io/projected/57268deb-95d5-4987-ab26-52f11e9182b4-kube-api-access-645q4\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.167318 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:03 crc kubenswrapper[4832]: I1002 18:35:03.442496 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh"] Oct 02 18:35:04 crc kubenswrapper[4832]: I1002 18:35:04.362336 4832 generic.go:334] "Generic (PLEG): container finished" podID="57268deb-95d5-4987-ab26-52f11e9182b4" containerID="4cd11435c0838ecda0ae625435c5f5022e53b78269aed972d164a284011e6676" exitCode=0 Oct 02 18:35:04 crc kubenswrapper[4832]: I1002 18:35:04.362387 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" event={"ID":"57268deb-95d5-4987-ab26-52f11e9182b4","Type":"ContainerDied","Data":"4cd11435c0838ecda0ae625435c5f5022e53b78269aed972d164a284011e6676"} Oct 02 18:35:04 crc kubenswrapper[4832]: I1002 18:35:04.362424 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" event={"ID":"57268deb-95d5-4987-ab26-52f11e9182b4","Type":"ContainerStarted","Data":"575aef07ae9c1eef403a0d1eac4d20802c8e03555b3036579f8aa3549fad6f06"} Oct 02 18:35:06 crc kubenswrapper[4832]: I1002 18:35:06.381532 4832 generic.go:334] "Generic (PLEG): container finished" podID="57268deb-95d5-4987-ab26-52f11e9182b4" containerID="88d18434f221a4a3e80993570e41c981aa3f917e650658f758fb1bbbc4aeed37" exitCode=0 Oct 02 18:35:06 crc kubenswrapper[4832]: I1002 18:35:06.381631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" event={"ID":"57268deb-95d5-4987-ab26-52f11e9182b4","Type":"ContainerDied","Data":"88d18434f221a4a3e80993570e41c981aa3f917e650658f758fb1bbbc4aeed37"} Oct 02 18:35:07 crc kubenswrapper[4832]: I1002 18:35:07.397911 4832 generic.go:334] "Generic (PLEG): container finished" podID="57268deb-95d5-4987-ab26-52f11e9182b4" containerID="879adfd42c46201b4f9f2c3371c657a9f647f4dd13e2d8ae36d9e4b3aefabf83" exitCode=0 Oct 02 18:35:07 crc kubenswrapper[4832]: I1002 
18:35:07.397988 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" event={"ID":"57268deb-95d5-4987-ab26-52f11e9182b4","Type":"ContainerDied","Data":"879adfd42c46201b4f9f2c3371c657a9f647f4dd13e2d8ae36d9e4b3aefabf83"}
Oct 02 18:35:08 crc kubenswrapper[4832]: I1002 18:35:08.800653 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh"
Oct 02 18:35:08 crc kubenswrapper[4832]: I1002 18:35:08.914411 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-645q4\" (UniqueName: \"kubernetes.io/projected/57268deb-95d5-4987-ab26-52f11e9182b4-kube-api-access-645q4\") pod \"57268deb-95d5-4987-ab26-52f11e9182b4\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") "
Oct 02 18:35:08 crc kubenswrapper[4832]: I1002 18:35:08.914540 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-util\") pod \"57268deb-95d5-4987-ab26-52f11e9182b4\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") "
Oct 02 18:35:08 crc kubenswrapper[4832]: I1002 18:35:08.914641 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-bundle\") pod \"57268deb-95d5-4987-ab26-52f11e9182b4\" (UID: \"57268deb-95d5-4987-ab26-52f11e9182b4\") "
Oct 02 18:35:08 crc kubenswrapper[4832]: I1002 18:35:08.915131 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-bundle" (OuterVolumeSpecName: "bundle") pod "57268deb-95d5-4987-ab26-52f11e9182b4" (UID: "57268deb-95d5-4987-ab26-52f11e9182b4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 18:35:08 crc kubenswrapper[4832]: I1002 18:35:08.920479 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57268deb-95d5-4987-ab26-52f11e9182b4-kube-api-access-645q4" (OuterVolumeSpecName: "kube-api-access-645q4") pod "57268deb-95d5-4987-ab26-52f11e9182b4" (UID: "57268deb-95d5-4987-ab26-52f11e9182b4"). InnerVolumeSpecName "kube-api-access-645q4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:35:09 crc kubenswrapper[4832]: I1002 18:35:09.016992 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 18:35:09 crc kubenswrapper[4832]: I1002 18:35:09.017226 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-645q4\" (UniqueName: \"kubernetes.io/projected/57268deb-95d5-4987-ab26-52f11e9182b4-kube-api-access-645q4\") on node \"crc\" DevicePath \"\""
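Every entry in this capture follows the same framing: a systemd-journal prefix (timestamp, host, unit[pid]) followed by a klog header (severity letter plus MMDD date, wall-clock time, thread id, source file:line) and a structured message. A minimal Go sketch of pulling those fields apart; the regular expression and group names are written against the record shape visible here, as an illustration rather than any official parser:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // One record from this capture, e.g.:
    //   Oct 02 18:35:09 crc kubenswrapper[4832]: I1002 18:35:09.418597 4832 util.go:48] "No ready sandbox ..."
    // Capture groups: journal time, host, unit, pid, severity (I/W/E/F),
    // klog date (MMDD), klog time, thread id, source file:line, message.
    var recordRE = regexp.MustCompile(
    	`^(\w{3} \d{2} \d{2}:\d{2}:\d{2}) (\S+) (\w+)\[(\d+)\]: ` +
    		`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
    	line := `Oct 02 18:35:09 crc kubenswrapper[4832]: I1002 18:35:09.418597 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh"`
    	m := recordRE.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("not a kubenswrapper record")
    		return
    	}
    	fmt.Printf("severity=%s source=%s\nmessage=%s\n", m[5], m[9], m[10])
    }

The severity letter and source location (m[5], m[9]) are usually enough to separate the E1002 error records below from routine I1002 traffic.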
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:35:09 crc kubenswrapper[4832]: I1002 18:35:09.220493 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57268deb-95d5-4987-ab26-52f11e9182b4-util\") on node \"crc\" DevicePath \"\"" Oct 02 18:35:09 crc kubenswrapper[4832]: I1002 18:35:09.418465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" event={"ID":"57268deb-95d5-4987-ab26-52f11e9182b4","Type":"ContainerDied","Data":"575aef07ae9c1eef403a0d1eac4d20802c8e03555b3036579f8aa3549fad6f06"} Oct 02 18:35:09 crc kubenswrapper[4832]: I1002 18:35:09.418527 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="575aef07ae9c1eef403a0d1eac4d20802c8e03555b3036579f8aa3549fad6f06" Oct 02 18:35:09 crc kubenswrapper[4832]: I1002 18:35:09.418597 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.475591 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-llnvs"] Oct 02 18:35:13 crc kubenswrapper[4832]: E1002 18:35:13.476132 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57268deb-95d5-4987-ab26-52f11e9182b4" containerName="pull" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.476146 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57268deb-95d5-4987-ab26-52f11e9182b4" containerName="pull" Oct 02 18:35:13 crc kubenswrapper[4832]: E1002 18:35:13.476164 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57268deb-95d5-4987-ab26-52f11e9182b4" containerName="extract" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.476173 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57268deb-95d5-4987-ab26-52f11e9182b4" containerName="extract" Oct 02 18:35:13 crc kubenswrapper[4832]: E1002 18:35:13.476193 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57268deb-95d5-4987-ab26-52f11e9182b4" containerName="util" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.476202 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57268deb-95d5-4987-ab26-52f11e9182b4" containerName="util" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.476413 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="57268deb-95d5-4987-ab26-52f11e9182b4" containerName="extract" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.476988 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-llnvs" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.478823 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-wzhl7" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.479390 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.479480 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.487412 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-llnvs"] Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.609765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghhf\" (UniqueName: \"kubernetes.io/projected/c2e56bcb-1ff3-4c1f-8353-b84a573d23b2-kube-api-access-mghhf\") pod \"nmstate-operator-858ddd8f98-llnvs\" (UID: \"c2e56bcb-1ff3-4c1f-8353-b84a573d23b2\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-llnvs" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.711046 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mghhf\" (UniqueName: \"kubernetes.io/projected/c2e56bcb-1ff3-4c1f-8353-b84a573d23b2-kube-api-access-mghhf\") pod \"nmstate-operator-858ddd8f98-llnvs\" (UID: \"c2e56bcb-1ff3-4c1f-8353-b84a573d23b2\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-llnvs" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.740900 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mghhf\" (UniqueName: \"kubernetes.io/projected/c2e56bcb-1ff3-4c1f-8353-b84a573d23b2-kube-api-access-mghhf\") pod \"nmstate-operator-858ddd8f98-llnvs\" (UID: \"c2e56bcb-1ff3-4c1f-8353-b84a573d23b2\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-llnvs" Oct 02 18:35:13 crc kubenswrapper[4832]: I1002 18:35:13.796429 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-llnvs" Oct 02 18:35:14 crc kubenswrapper[4832]: I1002 18:35:14.228049 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-llnvs"] Oct 02 18:35:14 crc kubenswrapper[4832]: I1002 18:35:14.455151 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-llnvs" event={"ID":"c2e56bcb-1ff3-4c1f-8353-b84a573d23b2","Type":"ContainerStarted","Data":"7752f2799857bdb30b5310f496d138eb19659584431e556be6b2dfb4bd6f4e0f"} Oct 02 18:35:17 crc kubenswrapper[4832]: I1002 18:35:17.485196 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-llnvs" event={"ID":"c2e56bcb-1ff3-4c1f-8353-b84a573d23b2","Type":"ContainerStarted","Data":"fad961f1b87641c9e2744e328ab84071889d9d897eb5e6073571608949f77528"} Oct 02 18:35:17 crc kubenswrapper[4832]: I1002 18:35:17.504078 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-llnvs" podStartSLOduration=2.393543625 podStartE2EDuration="4.504054197s" podCreationTimestamp="2025-10-02 18:35:13 +0000 UTC" firstStartedPulling="2025-10-02 18:35:14.234958521 +0000 UTC m=+871.204401403" lastFinishedPulling="2025-10-02 18:35:16.345469083 +0000 UTC m=+873.314911975" observedRunningTime="2025-10-02 18:35:17.500773274 +0000 UTC m=+874.470216186" watchObservedRunningTime="2025-10-02 18:35:17.504054197 +0000 UTC m=+874.473497109" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.159469 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.161502 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.164819 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-l45jd" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.170667 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.177709 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.180804 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.183903 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.207397 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-g62zg"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.208469 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.217728 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.298075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c558f\" (UniqueName: \"kubernetes.io/projected/d6480df8-e541-45f9-b397-d6abe2be00d3-kube-api-access-c558f\") pod \"nmstate-webhook-6cdbc54649-wp5dm\" (UID: \"d6480df8-e541-45f9-b397-d6abe2be00d3\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.298154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-ovs-socket\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.298209 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57xnj\" (UniqueName: \"kubernetes.io/projected/477c57db-3df8-4587-abf7-ef94e8c4ad69-kube-api-access-57xnj\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.298282 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d6l7\" (UniqueName: \"kubernetes.io/projected/679a35a4-780b-431c-bb41-37763bf32d80-kube-api-access-6d6l7\") pod \"nmstate-metrics-fdff9cb8d-vldj4\" (UID: \"679a35a4-780b-431c-bb41-37763bf32d80\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.298306 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6480df8-e541-45f9-b397-d6abe2be00d3-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-wp5dm\" (UID: \"d6480df8-e541-45f9-b397-d6abe2be00d3\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.298378 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-dbus-socket\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.298401 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-nmstate-lock\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.304924 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.305816 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.308724 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ws44s" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.308894 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.309508 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.319557 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-ovs-socket\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xnj\" (UniqueName: \"kubernetes.io/projected/477c57db-3df8-4587-abf7-ef94e8c4ad69-kube-api-access-57xnj\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399692 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmn5\" (UniqueName: \"kubernetes.io/projected/0c339642-1f25-4795-a62a-2db5045984cb-kube-api-access-bkmn5\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: \"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-ovs-socket\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d6l7\" (UniqueName: \"kubernetes.io/projected/679a35a4-780b-431c-bb41-37763bf32d80-kube-api-access-6d6l7\") pod \"nmstate-metrics-fdff9cb8d-vldj4\" (UID: \"679a35a4-780b-431c-bb41-37763bf32d80\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399813 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6480df8-e541-45f9-b397-d6abe2be00d3-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-wp5dm\" (UID: \"d6480df8-e541-45f9-b397-d6abe2be00d3\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c339642-1f25-4795-a62a-2db5045984cb-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: 
\"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-dbus-socket\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399891 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-nmstate-lock\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0c339642-1f25-4795-a62a-2db5045984cb-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: \"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.399938 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c558f\" (UniqueName: \"kubernetes.io/projected/d6480df8-e541-45f9-b397-d6abe2be00d3-kube-api-access-c558f\") pod \"nmstate-webhook-6cdbc54649-wp5dm\" (UID: \"d6480df8-e541-45f9-b397-d6abe2be00d3\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" Oct 02 18:35:24 crc kubenswrapper[4832]: E1002 18:35:24.400211 4832 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 02 18:35:24 crc kubenswrapper[4832]: E1002 18:35:24.400287 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6480df8-e541-45f9-b397-d6abe2be00d3-tls-key-pair podName:d6480df8-e541-45f9-b397-d6abe2be00d3 nodeName:}" failed. No retries permitted until 2025-10-02 18:35:24.900253378 +0000 UTC m=+881.869696250 (durationBeforeRetry 500ms). 
Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.400307 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-nmstate-lock\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg"
Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.400581 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/477c57db-3df8-4587-abf7-ef94e8c4ad69-dbus-socket\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg"
Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.419095 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c558f\" (UniqueName: \"kubernetes.io/projected/d6480df8-e541-45f9-b397-d6abe2be00d3-kube-api-access-c558f\") pod \"nmstate-webhook-6cdbc54649-wp5dm\" (UID: \"d6480df8-e541-45f9-b397-d6abe2be00d3\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm"
Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.419726 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57xnj\" (UniqueName: \"kubernetes.io/projected/477c57db-3df8-4587-abf7-ef94e8c4ad69-kube-api-access-57xnj\") pod \"nmstate-handler-g62zg\" (UID: \"477c57db-3df8-4587-abf7-ef94e8c4ad69\") " pod="openshift-nmstate/nmstate-handler-g62zg"
Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.423126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d6l7\" (UniqueName: \"kubernetes.io/projected/679a35a4-780b-431c-bb41-37763bf32d80-kube-api-access-6d6l7\") pod \"nmstate-metrics-fdff9cb8d-vldj4\" (UID: \"679a35a4-780b-431c-bb41-37763bf32d80\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4"
Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.480471 4832 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.501455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkmn5\" (UniqueName: \"kubernetes.io/projected/0c339642-1f25-4795-a62a-2db5045984cb-kube-api-access-bkmn5\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: \"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.501797 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c339642-1f25-4795-a62a-2db5045984cb-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: \"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.501859 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0c339642-1f25-4795-a62a-2db5045984cb-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: \"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.502814 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/0c339642-1f25-4795-a62a-2db5045984cb-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: \"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.506221 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c339642-1f25-4795-a62a-2db5045984cb-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: \"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.510556 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-cf69dd54d-z6zmc"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.511430 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.521853 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cf69dd54d-z6zmc"] Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.529127 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkmn5\" (UniqueName: \"kubernetes.io/projected/0c339642-1f25-4795-a62a-2db5045984cb-kube-api-access-bkmn5\") pod \"nmstate-console-plugin-6b874cbd85-mklc7\" (UID: \"0c339642-1f25-4795-a62a-2db5045984cb\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.532171 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.602931 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-service-ca\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.602969 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-oauth-serving-cert\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.602995 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-config\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.603057 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-serving-cert\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.603091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-oauth-config\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.603134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7275v\" (UniqueName: \"kubernetes.io/projected/ea039358-89f3-4cab-a81f-77dbdbd6e667-kube-api-access-7275v\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.603151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-trusted-ca-bundle\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.625282 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.704819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7275v\" (UniqueName: \"kubernetes.io/projected/ea039358-89f3-4cab-a81f-77dbdbd6e667-kube-api-access-7275v\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.704864 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-trusted-ca-bundle\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.704905 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-service-ca\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.704921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-oauth-serving-cert\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.704942 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-config\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.704995 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-serving-cert\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.705029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-oauth-config\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.709467 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-service-ca\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.709722 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-config\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " 
pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.709847 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-oauth-serving-cert\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.710491 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-trusted-ca-bundle\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.711166 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-serving-cert\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.714325 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-oauth-config\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.772102 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7275v\" (UniqueName: \"kubernetes.io/projected/ea039358-89f3-4cab-a81f-77dbdbd6e667-kube-api-access-7275v\") pod \"console-cf69dd54d-z6zmc\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.884601 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.910568 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6480df8-e541-45f9-b397-d6abe2be00d3-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-wp5dm\" (UID: \"d6480df8-e541-45f9-b397-d6abe2be00d3\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.913794 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d6480df8-e541-45f9-b397-d6abe2be00d3-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-wp5dm\" (UID: \"d6480df8-e541-45f9-b397-d6abe2be00d3\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" Oct 02 18:35:24 crc kubenswrapper[4832]: I1002 18:35:24.972702 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4"] Oct 02 18:35:24 crc kubenswrapper[4832]: W1002 18:35:24.977994 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod679a35a4_780b_431c_bb41_37763bf32d80.slice/crio-97ea4ecbfaa62ebed6a6b59eaa5a9416762538e8497d3651b2c1e6940657d6cc WatchSource:0}: Error finding container 97ea4ecbfaa62ebed6a6b59eaa5a9416762538e8497d3651b2c1e6940657d6cc: Status 404 returned error can't find the container with id 97ea4ecbfaa62ebed6a6b59eaa5a9416762538e8497d3651b2c1e6940657d6cc Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.100696 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.213897 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7"] Oct 02 18:35:25 crc kubenswrapper[4832]: W1002 18:35:25.220911 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c339642_1f25_4795_a62a_2db5045984cb.slice/crio-3e13ba282c36556a85d446d6eb8fc8487313c86ac4e1c87344309357e168dcf7 WatchSource:0}: Error finding container 3e13ba282c36556a85d446d6eb8fc8487313c86ac4e1c87344309357e168dcf7: Status 404 returned error can't find the container with id 3e13ba282c36556a85d446d6eb8fc8487313c86ac4e1c87344309357e168dcf7 Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.332815 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cf69dd54d-z6zmc"] Oct 02 18:35:25 crc kubenswrapper[4832]: W1002 18:35:25.339013 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea039358_89f3_4cab_a81f_77dbdbd6e667.slice/crio-bd40e9a359b9a50745d2cec8ed876a744ea5b4f16f1ae137ac02b8b1e805d9eb WatchSource:0}: Error finding container bd40e9a359b9a50745d2cec8ed876a744ea5b4f16f1ae137ac02b8b1e805d9eb: Status 404 returned error can't find the container with id bd40e9a359b9a50745d2cec8ed876a744ea5b4f16f1ae137ac02b8b1e805d9eb Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.521706 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm"] Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.551043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cf69dd54d-z6zmc" 
event={"ID":"ea039358-89f3-4cab-a81f-77dbdbd6e667","Type":"ContainerStarted","Data":"451c4ed9951e1b81154922c3a20b1d06a62b8589f785cc1f722a5941ad65fe55"} Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.551087 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cf69dd54d-z6zmc" event={"ID":"ea039358-89f3-4cab-a81f-77dbdbd6e667","Type":"ContainerStarted","Data":"bd40e9a359b9a50745d2cec8ed876a744ea5b4f16f1ae137ac02b8b1e805d9eb"} Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.553035 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" event={"ID":"0c339642-1f25-4795-a62a-2db5045984cb","Type":"ContainerStarted","Data":"3e13ba282c36556a85d446d6eb8fc8487313c86ac4e1c87344309357e168dcf7"} Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.554254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" event={"ID":"d6480df8-e541-45f9-b397-d6abe2be00d3","Type":"ContainerStarted","Data":"938504f59e0ad01f6b891aa24fd201963ce7df44f2d629dfc7783495f0d04148"} Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.557188 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g62zg" event={"ID":"477c57db-3df8-4587-abf7-ef94e8c4ad69","Type":"ContainerStarted","Data":"92a32e7dc06ea16d61daf8e2fdb0717883f831039103a637d30ab3dfd7750b18"} Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.558122 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4" event={"ID":"679a35a4-780b-431c-bb41-37763bf32d80","Type":"ContainerStarted","Data":"97ea4ecbfaa62ebed6a6b59eaa5a9416762538e8497d3651b2c1e6940657d6cc"} Oct 02 18:35:25 crc kubenswrapper[4832]: I1002 18:35:25.567374 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-cf69dd54d-z6zmc" podStartSLOduration=1.567361521 podStartE2EDuration="1.567361521s" podCreationTimestamp="2025-10-02 18:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:35:25.567048481 +0000 UTC m=+882.536491343" watchObservedRunningTime="2025-10-02 18:35:25.567361521 +0000 UTC m=+882.536804393" Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 18:35:28.594448 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g62zg" event={"ID":"477c57db-3df8-4587-abf7-ef94e8c4ad69","Type":"ContainerStarted","Data":"5af5fa558bbe7f45afc15a81cb11f2898ea78e8662df6f72e46ce984250c8d3d"} Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 18:35:28.594879 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-g62zg" Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 18:35:28.597176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4" event={"ID":"679a35a4-780b-431c-bb41-37763bf32d80","Type":"ContainerStarted","Data":"db22749e80f0628b2e858c82984586547f211a7147770a00ee3ce38f683ae001"} Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 18:35:28.598581 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" event={"ID":"0c339642-1f25-4795-a62a-2db5045984cb","Type":"ContainerStarted","Data":"37429aa17c3d0c31ee50789c4cf4ef5bb7ff227e6b9f4a0b50cf7772bae2d89c"} Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 
18:35:28.601380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" event={"ID":"d6480df8-e541-45f9-b397-d6abe2be00d3","Type":"ContainerStarted","Data":"98c1eda98da70e3bc19440e078c92c572ecf6873b1426de8427f261de5a51948"}
Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 18:35:28.601556 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm"
Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 18:35:28.612320 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-g62zg" podStartSLOduration=1.246781366 podStartE2EDuration="4.612304779s" podCreationTimestamp="2025-10-02 18:35:24 +0000 UTC" firstStartedPulling="2025-10-02 18:35:24.563874406 +0000 UTC m=+881.533317278" lastFinishedPulling="2025-10-02 18:35:27.929397779 +0000 UTC m=+884.898840691" observedRunningTime="2025-10-02 18:35:28.609836751 +0000 UTC m=+885.579279623" watchObservedRunningTime="2025-10-02 18:35:28.612304779 +0000 UTC m=+885.581747651"
Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 18:35:28.633076 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-mklc7" podStartSLOduration=1.928680205 podStartE2EDuration="4.6330542s" podCreationTimestamp="2025-10-02 18:35:24 +0000 UTC" firstStartedPulling="2025-10-02 18:35:25.224010992 +0000 UTC m=+882.193453864" lastFinishedPulling="2025-10-02 18:35:27.928384987 +0000 UTC m=+884.897827859" observedRunningTime="2025-10-02 18:35:28.622762887 +0000 UTC m=+885.592205759" watchObservedRunningTime="2025-10-02 18:35:28.6330542 +0000 UTC m=+885.602497072"
Oct 02 18:35:28 crc kubenswrapper[4832]: I1002 18:35:28.642654 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm" podStartSLOduration=2.256136354 podStartE2EDuration="4.642638281s" podCreationTimestamp="2025-10-02 18:35:24 +0000 UTC" firstStartedPulling="2025-10-02 18:35:25.544460171 +0000 UTC m=+882.513903043" lastFinishedPulling="2025-10-02 18:35:27.930962078 +0000 UTC m=+884.900404970" observedRunningTime="2025-10-02 18:35:28.640365121 +0000 UTC m=+885.609807993" watchObservedRunningTime="2025-10-02 18:35:28.642638281 +0000 UTC m=+885.612081153"
Oct 02 18:35:31 crc kubenswrapper[4832]: I1002 18:35:31.638002 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4" event={"ID":"679a35a4-780b-431c-bb41-37763bf32d80","Type":"ContainerStarted","Data":"04218b61697e40e4a7e098712299d3444b81ff68bc928f27fa6a3668ae82790a"}
Oct 02 18:35:31 crc kubenswrapper[4832]: I1002 18:35:31.665215 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-vldj4" podStartSLOduration=1.782895608 podStartE2EDuration="7.665186927s" podCreationTimestamp="2025-10-02 18:35:24 +0000 UTC" firstStartedPulling="2025-10-02 18:35:24.984361777 +0000 UTC m=+881.953804649" lastFinishedPulling="2025-10-02 18:35:30.866653096 +0000 UTC m=+887.836095968" observedRunningTime="2025-10-02 18:35:31.663756362 +0000 UTC m=+888.633199274" watchObservedRunningTime="2025-10-02 18:35:31.665186927 +0000 UTC m=+888.634629829"
Oct 02 18:35:34 crc kubenswrapper[4832]: I1002 18:35:34.574942 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-g62zg"
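The startup-latency records above carry enough fields to recompute both figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go check against the nmstate-handler-g62zg record, with the monotonic "m=+..." suffixes stripped before parsing:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Layout for the wall-clock part of kubelet's timestamps.
    const layout = "2006-01-02 15:04:05 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	// Values copied from the nmstate-handler-g62zg record above.
    	created := mustParse("2025-10-02 18:35:24 +0000 UTC")
    	firstPull := mustParse("2025-10-02 18:35:24.563874406 +0000 UTC")
    	lastPull := mustParse("2025-10-02 18:35:27.929397779 +0000 UTC")
    	observed := mustParse("2025-10-02 18:35:28.612304779 +0000 UTC")

    	e2e := observed.Sub(created)         // 4.612304779s, exactly as logged
    	slo := e2e - lastPull.Sub(firstPull) // ~1.246781s
    	fmt.Println("podStartE2EDuration =", e2e)
    	fmt.Println("podStartSLOduration =", slo)
    }

The E2E figure matches the log exactly; the SLO figure differs in the last couple of digits (1.246781406s here versus 1.246781366 in the record) because kubelet subtracts the monotonic m=+ readings rather than the wall-clock strings.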
Oct 02 18:35:34 crc kubenswrapper[4832]: I1002 18:35:34.885293 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-cf69dd54d-z6zmc"
Oct 02 18:35:34 crc kubenswrapper[4832]: I1002 18:35:34.885354 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-cf69dd54d-z6zmc"
Oct 02 18:35:34 crc kubenswrapper[4832]: I1002 18:35:34.892717 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-cf69dd54d-z6zmc"
Oct 02 18:35:35 crc kubenswrapper[4832]: I1002 18:35:35.715381 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-cf69dd54d-z6zmc"
Oct 02 18:35:35 crc kubenswrapper[4832]: I1002 18:35:35.838896 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-949d8b8f8-m2cbs"]
Oct 02 18:35:45 crc kubenswrapper[4832]: I1002 18:35:45.112138 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-wp5dm"
Oct 02 18:36:00 crc kubenswrapper[4832]: I1002 18:36:00.901399 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-949d8b8f8-m2cbs" podUID="7ae2edd9-becf-44a4-aa8f-0951901a89c4" containerName="console" containerID="cri-o://2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f" gracePeriod=15
Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.379391 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-949d8b8f8-m2cbs_7ae2edd9-becf-44a4-aa8f-0951901a89c4/console/0.log"
Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.380045 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-949d8b8f8-m2cbs"
Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.490954 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-config\") pod \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") "
Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.491317 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-serving-cert\") pod \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") "
Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.491354 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-service-ca\") pod \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") "
Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.491386 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-oauth-config\") pod \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") "
Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.491435 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5n8b\" (UniqueName: \"kubernetes.io/projected/7ae2edd9-becf-44a4-aa8f-0951901a89c4-kube-api-access-n5n8b\") pod
\"7ae2edd9-becf-44a4-aa8f-0951901a89c4\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.492075 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-service-ca" (OuterVolumeSpecName: "service-ca") pod "7ae2edd9-becf-44a4-aa8f-0951901a89c4" (UID: "7ae2edd9-becf-44a4-aa8f-0951901a89c4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.492298 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-oauth-serving-cert\") pod \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.492340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-trusted-ca-bundle\") pod \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\" (UID: \"7ae2edd9-becf-44a4-aa8f-0951901a89c4\") " Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.492535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-config" (OuterVolumeSpecName: "console-config") pod "7ae2edd9-becf-44a4-aa8f-0951901a89c4" (UID: "7ae2edd9-becf-44a4-aa8f-0951901a89c4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.492765 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7ae2edd9-becf-44a4-aa8f-0951901a89c4" (UID: "7ae2edd9-becf-44a4-aa8f-0951901a89c4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.493147 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7ae2edd9-becf-44a4-aa8f-0951901a89c4" (UID: "7ae2edd9-becf-44a4-aa8f-0951901a89c4"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.493384 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.493407 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.493426 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.493439 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.497096 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae2edd9-becf-44a4-aa8f-0951901a89c4-kube-api-access-n5n8b" (OuterVolumeSpecName: "kube-api-access-n5n8b") pod "7ae2edd9-becf-44a4-aa8f-0951901a89c4" (UID: "7ae2edd9-becf-44a4-aa8f-0951901a89c4"). InnerVolumeSpecName "kube-api-access-n5n8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.497945 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7ae2edd9-becf-44a4-aa8f-0951901a89c4" (UID: "7ae2edd9-becf-44a4-aa8f-0951901a89c4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.498356 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7ae2edd9-becf-44a4-aa8f-0951901a89c4" (UID: "7ae2edd9-becf-44a4-aa8f-0951901a89c4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.595063 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5n8b\" (UniqueName: \"kubernetes.io/projected/7ae2edd9-becf-44a4-aa8f-0951901a89c4-kube-api-access-n5n8b\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.595100 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.595112 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7ae2edd9-becf-44a4-aa8f-0951901a89c4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.942153 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-949d8b8f8-m2cbs_7ae2edd9-becf-44a4-aa8f-0951901a89c4/console/0.log" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.942602 4832 generic.go:334] "Generic (PLEG): container finished" podID="7ae2edd9-becf-44a4-aa8f-0951901a89c4" containerID="2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f" exitCode=2 Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.942645 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-949d8b8f8-m2cbs" event={"ID":"7ae2edd9-becf-44a4-aa8f-0951901a89c4","Type":"ContainerDied","Data":"2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f"} Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.942697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-949d8b8f8-m2cbs" event={"ID":"7ae2edd9-becf-44a4-aa8f-0951901a89c4","Type":"ContainerDied","Data":"381e8b8e88892b82401f765931b301ae1fe986295243aaa0d0b7a4cd8e54c5b3"} Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.942716 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-949d8b8f8-m2cbs" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.942735 4832 scope.go:117] "RemoveContainer" containerID="2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.972848 4832 scope.go:117] "RemoveContainer" containerID="2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f" Oct 02 18:36:01 crc kubenswrapper[4832]: E1002 18:36:01.974040 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f\": container with ID starting with 2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f not found: ID does not exist" containerID="2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.974109 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f"} err="failed to get container status \"2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f\": rpc error: code = NotFound desc = could not find container \"2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f\": container with ID starting with 2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f not found: ID does not exist" Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.989836 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-949d8b8f8-m2cbs"] Oct 02 18:36:01 crc kubenswrapper[4832]: I1002 18:36:01.994515 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-949d8b8f8-m2cbs"] Oct 02 18:36:03 crc kubenswrapper[4832]: I1002 18:36:03.232552 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae2edd9-becf-44a4-aa8f-0951901a89c4" path="/var/lib/kubelet/pods/7ae2edd9-becf-44a4-aa8f-0951901a89c4/volumes" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.311712 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9"] Oct 02 18:36:06 crc kubenswrapper[4832]: E1002 18:36:06.312475 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae2edd9-becf-44a4-aa8f-0951901a89c4" containerName="console" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.312491 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae2edd9-becf-44a4-aa8f-0951901a89c4" containerName="console" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.312654 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae2edd9-becf-44a4-aa8f-0951901a89c4" containerName="console" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.313921 4832 util.go:30] "No sandbox for pod can be found. 
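The second RemoveContainer pass above asks the runtime for the status of a container that was already deleted, gets a gRPC NotFound back, logs it, and carries on to the API DELETE/REMOVE: deletion is treated as idempotent rather than fatal. A sketch of that tolerate-NotFound pattern using the standard gRPC status package; containerStatus here is a stand-in written for the sketch, not the actual CRI client code.

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // containerStatus stands in for the CRI ContainerStatus RPC; here it
    // always reports the container as missing, as in the records above.
    func containerStatus(id string) error {
    	return status.Errorf(codes.NotFound,
    		"could not find container %q: container with ID starting with %s not found: ID does not exist",
    		id, id)
    }

    func main() {
    	id := "2376e04b10daf7492986cedab6d5ad5671363580201e661fa07aac1abb74cd9f"
    	err := containerStatus(id)
    	if status.Code(err) == codes.NotFound {
    		// Already gone: removal is idempotent, so log and move on,
    		// which is what the kubelet does above.
    		fmt.Println("container already removed:", err)
    		return
    	}
    	if err != nil {
    		fmt.Println("unexpected error:", err)
    	}
    }

status.Code returns codes.OK for a nil error, so the same check works whether the first removal attempt or a later duplicate pass is running.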
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.315755 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.323313 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9"] Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.377486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.377536 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.377652 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpflv\" (UniqueName: \"kubernetes.io/projected/1d90d357-6ff7-497d-a6c5-2dbd6af40493-kube-api-access-kpflv\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.479445 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.479502 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.479544 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpflv\" (UniqueName: \"kubernetes.io/projected/1d90d357-6ff7-497d-a6c5-2dbd6af40493-kube-api-access-kpflv\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.479957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.480222 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.497960 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpflv\" (UniqueName: \"kubernetes.io/projected/1d90d357-6ff7-497d-a6c5-2dbd6af40493-kube-api-access-kpflv\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.628082 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.904264 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9"] Oct 02 18:36:06 crc kubenswrapper[4832]: I1002 18:36:06.980365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" event={"ID":"1d90d357-6ff7-497d-a6c5-2dbd6af40493","Type":"ContainerStarted","Data":"c53a2ec9d24bf77ff30209de4a40ff9c108cb4dbd850e026b8247819d33fcd82"} Oct 02 18:36:07 crc kubenswrapper[4832]: I1002 18:36:07.989541 4832 generic.go:334] "Generic (PLEG): container finished" podID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerID="c501090ebec30dabc712b3baed28992f09e96e0709913beb537d38c4187717ca" exitCode=0 Oct 02 18:36:07 crc kubenswrapper[4832]: I1002 18:36:07.989582 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" event={"ID":"1d90d357-6ff7-497d-a6c5-2dbd6af40493","Type":"ContainerDied","Data":"c501090ebec30dabc712b3baed28992f09e96e0709913beb537d38c4187717ca"} Oct 02 18:36:07 crc kubenswrapper[4832]: I1002 18:36:07.992204 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 18:36:10 crc kubenswrapper[4832]: I1002 18:36:10.003245 4832 generic.go:334] "Generic (PLEG): container finished" podID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerID="e891818658a6ee6640a74fadaa44c110455dcb37bf1342301e164ed3804944c8" exitCode=0 Oct 02 18:36:10 crc kubenswrapper[4832]: I1002 18:36:10.003333 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" event={"ID":"1d90d357-6ff7-497d-a6c5-2dbd6af40493","Type":"ContainerDied","Data":"e891818658a6ee6640a74fadaa44c110455dcb37bf1342301e164ed3804944c8"} Oct 02 18:36:11 crc kubenswrapper[4832]: I1002 18:36:11.012825 4832 generic.go:334] "Generic (PLEG): container finished" 
podID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerID="42126fc8f5fe80e49c0483ebc4f953f10e8e5f314b9416ecf0e65aa9975dd412" exitCode=0 Oct 02 18:36:11 crc kubenswrapper[4832]: I1002 18:36:11.012873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" event={"ID":"1d90d357-6ff7-497d-a6c5-2dbd6af40493","Type":"ContainerDied","Data":"42126fc8f5fe80e49c0483ebc4f953f10e8e5f314b9416ecf0e65aa9975dd412"} Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.395934 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.486556 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-util\") pod \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.486846 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpflv\" (UniqueName: \"kubernetes.io/projected/1d90d357-6ff7-497d-a6c5-2dbd6af40493-kube-api-access-kpflv\") pod \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.486951 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-bundle\") pod \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\" (UID: \"1d90d357-6ff7-497d-a6c5-2dbd6af40493\") " Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.488094 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-bundle" (OuterVolumeSpecName: "bundle") pod "1d90d357-6ff7-497d-a6c5-2dbd6af40493" (UID: "1d90d357-6ff7-497d-a6c5-2dbd6af40493"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.497429 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d90d357-6ff7-497d-a6c5-2dbd6af40493-kube-api-access-kpflv" (OuterVolumeSpecName: "kube-api-access-kpflv") pod "1d90d357-6ff7-497d-a6c5-2dbd6af40493" (UID: "1d90d357-6ff7-497d-a6c5-2dbd6af40493"). InnerVolumeSpecName "kube-api-access-kpflv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.589636 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpflv\" (UniqueName: \"kubernetes.io/projected/1d90d357-6ff7-497d-a6c5-2dbd6af40493-kube-api-access-kpflv\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.589907 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.695183 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-util" (OuterVolumeSpecName: "util") pod "1d90d357-6ff7-497d-a6c5-2dbd6af40493" (UID: "1d90d357-6ff7-497d-a6c5-2dbd6af40493"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:36:12 crc kubenswrapper[4832]: I1002 18:36:12.792672 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1d90d357-6ff7-497d-a6c5-2dbd6af40493-util\") on node \"crc\" DevicePath \"\"" Oct 02 18:36:13 crc kubenswrapper[4832]: I1002 18:36:13.032996 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" event={"ID":"1d90d357-6ff7-497d-a6c5-2dbd6af40493","Type":"ContainerDied","Data":"c53a2ec9d24bf77ff30209de4a40ff9c108cb4dbd850e026b8247819d33fcd82"} Oct 02 18:36:13 crc kubenswrapper[4832]: I1002 18:36:13.033052 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9" Oct 02 18:36:13 crc kubenswrapper[4832]: I1002 18:36:13.033057 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53a2ec9d24bf77ff30209de4a40ff9c108cb4dbd850e026b8247819d33fcd82" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.748304 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4"] Oct 02 18:36:21 crc kubenswrapper[4832]: E1002 18:36:21.749052 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerName="pull" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.749065 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerName="pull" Oct 02 18:36:21 crc kubenswrapper[4832]: E1002 18:36:21.749081 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerName="extract" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.749087 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerName="extract" Oct 02 18:36:21 crc kubenswrapper[4832]: E1002 18:36:21.749102 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerName="util" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.749108 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerName="util" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.749255 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d90d357-6ff7-497d-a6c5-2dbd6af40493" containerName="extract" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.749738 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.754078 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.754126 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.754305 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.754319 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-jtjjq" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.754361 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.763006 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4"] Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.832298 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj866\" (UniqueName: \"kubernetes.io/projected/964a5285-636b-4f3a-ab7d-226ff204c8f2-kube-api-access-cj866\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.832347 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/964a5285-636b-4f3a-ab7d-226ff204c8f2-apiservice-cert\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.832415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/964a5285-636b-4f3a-ab7d-226ff204c8f2-webhook-cert\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.933714 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj866\" (UniqueName: \"kubernetes.io/projected/964a5285-636b-4f3a-ab7d-226ff204c8f2-kube-api-access-cj866\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.933769 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/964a5285-636b-4f3a-ab7d-226ff204c8f2-apiservice-cert\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.933832 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/964a5285-636b-4f3a-ab7d-226ff204c8f2-webhook-cert\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.939221 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/964a5285-636b-4f3a-ab7d-226ff204c8f2-webhook-cert\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.939335 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/964a5285-636b-4f3a-ab7d-226ff204c8f2-apiservice-cert\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:21 crc kubenswrapper[4832]: I1002 18:36:21.951074 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj866\" (UniqueName: \"kubernetes.io/projected/964a5285-636b-4f3a-ab7d-226ff204c8f2-kube-api-access-cj866\") pod \"metallb-operator-controller-manager-7d88c76f5f-jpcb4\" (UID: \"964a5285-636b-4f3a-ab7d-226ff204c8f2\") " pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.124844 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.192060 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7878588579-8s24k"] Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.193107 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.195770 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sp8zl" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.195795 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.199569 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.204762 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7878588579-8s24k"] Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.238891 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94945b05-e6ed-4bb5-8dde-592b66304f50-apiservice-cert\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.238949 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jq6h\" (UniqueName: \"kubernetes.io/projected/94945b05-e6ed-4bb5-8dde-592b66304f50-kube-api-access-4jq6h\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.239121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94945b05-e6ed-4bb5-8dde-592b66304f50-webhook-cert\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.340156 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94945b05-e6ed-4bb5-8dde-592b66304f50-webhook-cert\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.340508 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94945b05-e6ed-4bb5-8dde-592b66304f50-apiservice-cert\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.340534 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jq6h\" (UniqueName: \"kubernetes.io/projected/94945b05-e6ed-4bb5-8dde-592b66304f50-kube-api-access-4jq6h\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 
18:36:22.360384 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94945b05-e6ed-4bb5-8dde-592b66304f50-apiservice-cert\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.365844 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94945b05-e6ed-4bb5-8dde-592b66304f50-webhook-cert\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.381684 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jq6h\" (UniqueName: \"kubernetes.io/projected/94945b05-e6ed-4bb5-8dde-592b66304f50-kube-api-access-4jq6h\") pod \"metallb-operator-webhook-server-7878588579-8s24k\" (UID: \"94945b05-e6ed-4bb5-8dde-592b66304f50\") " pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.524041 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.669370 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4"] Oct 02 18:36:22 crc kubenswrapper[4832]: I1002 18:36:22.956153 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7878588579-8s24k"] Oct 02 18:36:22 crc kubenswrapper[4832]: W1002 18:36:22.958917 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94945b05_e6ed_4bb5_8dde_592b66304f50.slice/crio-138a7fd862bba8ceea2f56b11da3938f6bdf87a3fabbfbb8b0dffe4b1edf2e47 WatchSource:0}: Error finding container 138a7fd862bba8ceea2f56b11da3938f6bdf87a3fabbfbb8b0dffe4b1edf2e47: Status 404 returned error can't find the container with id 138a7fd862bba8ceea2f56b11da3938f6bdf87a3fabbfbb8b0dffe4b1edf2e47 Oct 02 18:36:23 crc kubenswrapper[4832]: I1002 18:36:23.108411 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" event={"ID":"94945b05-e6ed-4bb5-8dde-592b66304f50","Type":"ContainerStarted","Data":"138a7fd862bba8ceea2f56b11da3938f6bdf87a3fabbfbb8b0dffe4b1edf2e47"} Oct 02 18:36:23 crc kubenswrapper[4832]: I1002 18:36:23.109438 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" event={"ID":"964a5285-636b-4f3a-ab7d-226ff204c8f2","Type":"ContainerStarted","Data":"d6fb01a9a87b161d702d42d344a77771564bcbf72480b2e675f8f8cedaac2b23"} Oct 02 18:36:30 crc kubenswrapper[4832]: I1002 18:36:30.169985 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" event={"ID":"94945b05-e6ed-4bb5-8dde-592b66304f50","Type":"ContainerStarted","Data":"d3400f2beea65427db5566a07c6699a991b9be42fa5cbfcc31d99d2cdc2182fe"} Oct 02 18:36:30 crc kubenswrapper[4832]: I1002 18:36:30.170562 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:36:30 crc kubenswrapper[4832]: I1002 18:36:30.171639 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" event={"ID":"964a5285-636b-4f3a-ab7d-226ff204c8f2","Type":"ContainerStarted","Data":"c2c566d4bd7c8c8fb862bb9a7edf397e1055fdbb4b4edc8c67270d0667143f4a"} Oct 02 18:36:30 crc kubenswrapper[4832]: I1002 18:36:30.171801 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:36:30 crc kubenswrapper[4832]: I1002 18:36:30.197213 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" podStartSLOduration=1.689093351 podStartE2EDuration="8.197193109s" podCreationTimestamp="2025-10-02 18:36:22 +0000 UTC" firstStartedPulling="2025-10-02 18:36:22.961418405 +0000 UTC m=+939.930861277" lastFinishedPulling="2025-10-02 18:36:29.469518173 +0000 UTC m=+946.438961035" observedRunningTime="2025-10-02 18:36:30.190675164 +0000 UTC m=+947.160118036" watchObservedRunningTime="2025-10-02 18:36:30.197193109 +0000 UTC m=+947.166635981" Oct 02 18:36:30 crc kubenswrapper[4832]: I1002 18:36:30.221186 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" podStartSLOduration=2.470989334 podStartE2EDuration="9.221162432s" podCreationTimestamp="2025-10-02 18:36:21 +0000 UTC" firstStartedPulling="2025-10-02 18:36:22.689993314 +0000 UTC m=+939.659436186" lastFinishedPulling="2025-10-02 18:36:29.440166412 +0000 UTC m=+946.409609284" observedRunningTime="2025-10-02 18:36:30.215770702 +0000 UTC m=+947.185213574" watchObservedRunningTime="2025-10-02 18:36:30.221162432 +0000 UTC m=+947.190605314" Oct 02 18:36:42 crc kubenswrapper[4832]: I1002 18:36:42.531831 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7878588579-8s24k" Oct 02 18:37:02 crc kubenswrapper[4832]: I1002 18:37:02.127816 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7d88c76f5f-jpcb4" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.070398 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dsxph"] Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.074072 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.076389 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.076722 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8xzs8" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.077348 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.077487 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp"] Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.078688 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.081335 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.085938 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp"] Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.169334 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-99pw7"] Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.170808 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.172788 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hmw6\" (UniqueName: \"kubernetes.io/projected/d460a40d-97e5-460d-aaf6-927bc8707843-kube-api-access-5hmw6\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.172826 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d460a40d-97e5-460d-aaf6-927bc8707843-metrics-certs\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.172853 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-metrics\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.172875 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-frr-sockets\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.172892 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-reloader\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.172926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-frr-conf\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.172948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d460a40d-97e5-460d-aaf6-927bc8707843-frr-startup\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.173239 4832 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"metallb-memberlist" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.173455 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.173606 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.173794 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-q8q8t" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.180311 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-xl8tj"] Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.181713 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.184804 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.188053 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-xl8tj"] Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274368 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hmw6\" (UniqueName: \"kubernetes.io/projected/d460a40d-97e5-460d-aaf6-927bc8707843-kube-api-access-5hmw6\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d460a40d-97e5-460d-aaf6-927bc8707843-metrics-certs\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274454 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-metrics\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-frr-sockets\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274494 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c297584e-08f6-47c0-8acd-35bd207a9394-metallb-excludel2\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-reloader\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274541 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274579 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce300e6d-f2b9-47e7-a85e-6a9543a69711-cert\") pod \"frr-k8s-webhook-server-64bf5d555-6zhbp\" (UID: \"ce300e6d-f2b9-47e7-a85e-6a9543a69711\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274597 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97l7k\" (UniqueName: \"kubernetes.io/projected/ce300e6d-f2b9-47e7-a85e-6a9543a69711-kube-api-access-97l7k\") pod \"frr-k8s-webhook-server-64bf5d555-6zhbp\" (UID: \"ce300e6d-f2b9-47e7-a85e-6a9543a69711\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274615 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-frr-conf\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274633 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d460a40d-97e5-460d-aaf6-927bc8707843-frr-startup\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274651 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97vs5\" (UniqueName: \"kubernetes.io/projected/c297584e-08f6-47c0-8acd-35bd207a9394-kube-api-access-97vs5\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.274716 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-metrics-certs\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.275858 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-metrics\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.276117 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-frr-sockets\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.276393 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-frr-conf\") pod 
\"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.277126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d460a40d-97e5-460d-aaf6-927bc8707843-frr-startup\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.279455 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d460a40d-97e5-460d-aaf6-927bc8707843-reloader\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.282205 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d460a40d-97e5-460d-aaf6-927bc8707843-metrics-certs\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.294904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hmw6\" (UniqueName: \"kubernetes.io/projected/d460a40d-97e5-460d-aaf6-927bc8707843-kube-api-access-5hmw6\") pod \"frr-k8s-dsxph\" (UID: \"d460a40d-97e5-460d-aaf6-927bc8707843\") " pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.375891 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-metrics-certs\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.375941 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkhbk\" (UniqueName: \"kubernetes.io/projected/80714958-8954-4014-97af-c480df6a6981-kube-api-access-hkhbk\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.375984 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80714958-8954-4014-97af-c480df6a6981-metrics-certs\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.376028 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c297584e-08f6-47c0-8acd-35bd207a9394-metallb-excludel2\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.376054 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: E1002 18:37:03.376063 4832 secret.go:188] Couldn't get secret 
metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Oct 02 18:37:03 crc kubenswrapper[4832]: E1002 18:37:03.376191 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-metrics-certs podName:c297584e-08f6-47c0-8acd-35bd207a9394 nodeName:}" failed. No retries permitted until 2025-10-02 18:37:03.87616864 +0000 UTC m=+980.845611512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-metrics-certs") pod "speaker-99pw7" (UID: "c297584e-08f6-47c0-8acd-35bd207a9394") : secret "speaker-certs-secret" not found Oct 02 18:37:03 crc kubenswrapper[4832]: E1002 18:37:03.376376 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 02 18:37:03 crc kubenswrapper[4832]: E1002 18:37:03.376483 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist podName:c297584e-08f6-47c0-8acd-35bd207a9394 nodeName:}" failed. No retries permitted until 2025-10-02 18:37:03.876456468 +0000 UTC m=+980.845899440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist") pod "speaker-99pw7" (UID: "c297584e-08f6-47c0-8acd-35bd207a9394") : secret "metallb-memberlist" not found Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.376706 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce300e6d-f2b9-47e7-a85e-6a9543a69711-cert\") pod \"frr-k8s-webhook-server-64bf5d555-6zhbp\" (UID: \"ce300e6d-f2b9-47e7-a85e-6a9543a69711\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.376780 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97l7k\" (UniqueName: \"kubernetes.io/projected/ce300e6d-f2b9-47e7-a85e-6a9543a69711-kube-api-access-97l7k\") pod \"frr-k8s-webhook-server-64bf5d555-6zhbp\" (UID: \"ce300e6d-f2b9-47e7-a85e-6a9543a69711\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.376957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c297584e-08f6-47c0-8acd-35bd207a9394-metallb-excludel2\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.376992 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80714958-8954-4014-97af-c480df6a6981-cert\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.377106 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97vs5\" (UniqueName: \"kubernetes.io/projected/c297584e-08f6-47c0-8acd-35bd207a9394-kube-api-access-97vs5\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.393967 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce300e6d-f2b9-47e7-a85e-6a9543a69711-cert\") pod \"frr-k8s-webhook-server-64bf5d555-6zhbp\" (UID: \"ce300e6d-f2b9-47e7-a85e-6a9543a69711\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.397466 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97vs5\" (UniqueName: \"kubernetes.io/projected/c297584e-08f6-47c0-8acd-35bd207a9394-kube-api-access-97vs5\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.397700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97l7k\" (UniqueName: \"kubernetes.io/projected/ce300e6d-f2b9-47e7-a85e-6a9543a69711-kube-api-access-97l7k\") pod \"frr-k8s-webhook-server-64bf5d555-6zhbp\" (UID: \"ce300e6d-f2b9-47e7-a85e-6a9543a69711\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.398143 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.413381 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.478946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkhbk\" (UniqueName: \"kubernetes.io/projected/80714958-8954-4014-97af-c480df6a6981-kube-api-access-hkhbk\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.479084 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80714958-8954-4014-97af-c480df6a6981-metrics-certs\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.479161 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80714958-8954-4014-97af-c480df6a6981-cert\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.480827 4832 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.484731 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80714958-8954-4014-97af-c480df6a6981-metrics-certs\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.492084 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80714958-8954-4014-97af-c480df6a6981-cert\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc 
kubenswrapper[4832]: I1002 18:37:03.494867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkhbk\" (UniqueName: \"kubernetes.io/projected/80714958-8954-4014-97af-c480df6a6981-kube-api-access-hkhbk\") pod \"controller-68d546b9d8-xl8tj\" (UID: \"80714958-8954-4014-97af-c480df6a6981\") " pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.519105 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.821198 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-xl8tj"] Oct 02 18:37:03 crc kubenswrapper[4832]: W1002 18:37:03.823609 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80714958_8954_4014_97af_c480df6a6981.slice/crio-f1142608f74e4fe37399354fba23c158601332a8de49bf6292aa2b9437e243ba WatchSource:0}: Error finding container f1142608f74e4fe37399354fba23c158601332a8de49bf6292aa2b9437e243ba: Status 404 returned error can't find the container with id f1142608f74e4fe37399354fba23c158601332a8de49bf6292aa2b9437e243ba Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.859282 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp"] Oct 02 18:37:03 crc kubenswrapper[4832]: W1002 18:37:03.883345 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce300e6d_f2b9_47e7_a85e_6a9543a69711.slice/crio-ad03cd39e4fbd0f72e49233677408ccee69bb8fe32fb403ffd2822b6bdeb5284 WatchSource:0}: Error finding container ad03cd39e4fbd0f72e49233677408ccee69bb8fe32fb403ffd2822b6bdeb5284: Status 404 returned error can't find the container with id ad03cd39e4fbd0f72e49233677408ccee69bb8fe32fb403ffd2822b6bdeb5284 Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.891789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-metrics-certs\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.891879 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:03 crc kubenswrapper[4832]: E1002 18:37:03.892029 4832 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 02 18:37:03 crc kubenswrapper[4832]: E1002 18:37:03.892085 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist podName:c297584e-08f6-47c0-8acd-35bd207a9394 nodeName:}" failed. No retries permitted until 2025-10-02 18:37:04.892070357 +0000 UTC m=+981.861513229 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist") pod "speaker-99pw7" (UID: "c297584e-08f6-47c0-8acd-35bd207a9394") : secret "metallb-memberlist" not found Oct 02 18:37:03 crc kubenswrapper[4832]: I1002 18:37:03.896879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-metrics-certs\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.454315 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-xl8tj" event={"ID":"80714958-8954-4014-97af-c480df6a6981","Type":"ContainerStarted","Data":"df0e8dd9e6cf3035d3cfe7dbce424745ae36b73c876052ecf463b73ae088fa77"} Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.454388 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-xl8tj" event={"ID":"80714958-8954-4014-97af-c480df6a6981","Type":"ContainerStarted","Data":"05da0735a56dbe152c171f7bec22aaa0e4bf751cbaf0839986d10a618324a35d"} Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.454413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-xl8tj" event={"ID":"80714958-8954-4014-97af-c480df6a6981","Type":"ContainerStarted","Data":"f1142608f74e4fe37399354fba23c158601332a8de49bf6292aa2b9437e243ba"} Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.454443 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.456595 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" event={"ID":"ce300e6d-f2b9-47e7-a85e-6a9543a69711","Type":"ContainerStarted","Data":"ad03cd39e4fbd0f72e49233677408ccee69bb8fe32fb403ffd2822b6bdeb5284"} Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.459853 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerStarted","Data":"45979f068aed40b121f24ac07c4fd1a7c9b84afe4559865c715239e60ccb99a8"} Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.492367 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-xl8tj" podStartSLOduration=1.492342503 podStartE2EDuration="1.492342503s" podCreationTimestamp="2025-10-02 18:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:37:04.484585099 +0000 UTC m=+981.454028001" watchObservedRunningTime="2025-10-02 18:37:04.492342503 +0000 UTC m=+981.461785415" Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.909822 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist\") pod \"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:04 crc kubenswrapper[4832]: I1002 18:37:04.916895 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c297584e-08f6-47c0-8acd-35bd207a9394-memberlist\") pod 
\"speaker-99pw7\" (UID: \"c297584e-08f6-47c0-8acd-35bd207a9394\") " pod="metallb-system/speaker-99pw7" Oct 02 18:37:05 crc kubenswrapper[4832]: I1002 18:37:05.006645 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-99pw7" Oct 02 18:37:05 crc kubenswrapper[4832]: I1002 18:37:05.496351 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-99pw7" event={"ID":"c297584e-08f6-47c0-8acd-35bd207a9394","Type":"ContainerStarted","Data":"26b7c4d3231be7a6b05f7fae1bae926f3428fea20402f611ca341414f10dc24d"} Oct 02 18:37:05 crc kubenswrapper[4832]: I1002 18:37:05.496847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-99pw7" event={"ID":"c297584e-08f6-47c0-8acd-35bd207a9394","Type":"ContainerStarted","Data":"7a9a5b422f0d1285bd9e49862313884e2b4c4199d03288e6d166a912a7c3711b"} Oct 02 18:37:06 crc kubenswrapper[4832]: I1002 18:37:06.509161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-99pw7" event={"ID":"c297584e-08f6-47c0-8acd-35bd207a9394","Type":"ContainerStarted","Data":"572f18753e71fe07d0efef967fc36df2e9907c092a22d2e0c7fddc5c6a71f218"} Oct 02 18:37:06 crc kubenswrapper[4832]: I1002 18:37:06.509472 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-99pw7" Oct 02 18:37:12 crc kubenswrapper[4832]: I1002 18:37:12.561189 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" event={"ID":"ce300e6d-f2b9-47e7-a85e-6a9543a69711","Type":"ContainerStarted","Data":"19ec08e49347a49a2da6c489a696c2b6b52b0a173752f096c7ffa65e1db00704"} Oct 02 18:37:12 crc kubenswrapper[4832]: I1002 18:37:12.562527 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:12 crc kubenswrapper[4832]: I1002 18:37:12.563766 4832 generic.go:334] "Generic (PLEG): container finished" podID="d460a40d-97e5-460d-aaf6-927bc8707843" containerID="bee39169a64091fe08e3655e63897b113969c38f8f9296ea5c17862838126103" exitCode=0 Oct 02 18:37:12 crc kubenswrapper[4832]: I1002 18:37:12.563810 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerDied","Data":"bee39169a64091fe08e3655e63897b113969c38f8f9296ea5c17862838126103"} Oct 02 18:37:12 crc kubenswrapper[4832]: I1002 18:37:12.583887 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" podStartSLOduration=1.280260605 podStartE2EDuration="9.583860362s" podCreationTimestamp="2025-10-02 18:37:03 +0000 UTC" firstStartedPulling="2025-10-02 18:37:03.886202813 +0000 UTC m=+980.855645685" lastFinishedPulling="2025-10-02 18:37:12.18980257 +0000 UTC m=+989.159245442" observedRunningTime="2025-10-02 18:37:12.579327331 +0000 UTC m=+989.548770243" watchObservedRunningTime="2025-10-02 18:37:12.583860362 +0000 UTC m=+989.553303244" Oct 02 18:37:12 crc kubenswrapper[4832]: I1002 18:37:12.585025 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-99pw7" podStartSLOduration=9.585015199 podStartE2EDuration="9.585015199s" podCreationTimestamp="2025-10-02 18:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:37:06.531032139 +0000 UTC m=+983.500475021" 
watchObservedRunningTime="2025-10-02 18:37:12.585015199 +0000 UTC m=+989.554458081" Oct 02 18:37:13 crc kubenswrapper[4832]: I1002 18:37:13.528143 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-xl8tj" Oct 02 18:37:13 crc kubenswrapper[4832]: I1002 18:37:13.577201 4832 generic.go:334] "Generic (PLEG): container finished" podID="d460a40d-97e5-460d-aaf6-927bc8707843" containerID="a2256c1b9f3e8d49f7de4ba0e327602cdd0c141d465ebd499be60ecf7b40838c" exitCode=0 Oct 02 18:37:13 crc kubenswrapper[4832]: I1002 18:37:13.577253 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerDied","Data":"a2256c1b9f3e8d49f7de4ba0e327602cdd0c141d465ebd499be60ecf7b40838c"} Oct 02 18:37:14 crc kubenswrapper[4832]: I1002 18:37:14.588089 4832 generic.go:334] "Generic (PLEG): container finished" podID="d460a40d-97e5-460d-aaf6-927bc8707843" containerID="33eafe02f684857e98468407ccc60de399f49398bef7f1dfb9200483e930773e" exitCode=0 Oct 02 18:37:14 crc kubenswrapper[4832]: I1002 18:37:14.588156 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerDied","Data":"33eafe02f684857e98468407ccc60de399f49398bef7f1dfb9200483e930773e"} Oct 02 18:37:15 crc kubenswrapper[4832]: I1002 18:37:15.011603 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-99pw7" Oct 02 18:37:15 crc kubenswrapper[4832]: I1002 18:37:15.605665 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerStarted","Data":"c4e055e53ad2a17fd4cdb12816f44b68ec876aa55129695bd1636fbc2b63ac18"} Oct 02 18:37:15 crc kubenswrapper[4832]: I1002 18:37:15.606059 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerStarted","Data":"1360544d637734acb10a2d9cb9f063b7608ce0afd19c8572ff98749e78df9431"} Oct 02 18:37:15 crc kubenswrapper[4832]: I1002 18:37:15.606081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerStarted","Data":"417130ef1cce646f408b55169a5d60d0e1e911a86c2afc79b16d2354d398568f"} Oct 02 18:37:15 crc kubenswrapper[4832]: I1002 18:37:15.606098 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerStarted","Data":"a937e261e5cdb824d926668f14ef627b326cce20beb0b7056a705d3d6796d9b5"} Oct 02 18:37:16 crc kubenswrapper[4832]: I1002 18:37:16.622534 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerStarted","Data":"30d6132dffc53e571a545887515eae74236164a533ff437111ba610bad9b229d"} Oct 02 18:37:16 crc kubenswrapper[4832]: I1002 18:37:16.623490 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:16 crc kubenswrapper[4832]: I1002 18:37:16.623588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dsxph" event={"ID":"d460a40d-97e5-460d-aaf6-927bc8707843","Type":"ContainerStarted","Data":"9591abaf77cd69b7bb6c0c4339d66d3262d97a8c09ca371c0e7d1ab30e63e0ed"} 
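The two MountVolume.SetUp failures in this log (the speaker's "memberlist" secret above, and the infra-operator "cert" secret further below, retried after 500ms) are the same benign startup race: the pod is scheduled before the secret it mounts has been created, the kubelet records "No retries permitted until ..." with a growing durationBeforeRetry, and a later reconciler pass succeeds once the secret exists (here within one second). The following is a minimal client-go sketch of that wait-for-secret pattern; it is illustrative only, not kubelet source, and the namespace and secret names are simply taken from the log entries above.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes in-cluster credentials (e.g. running from a pod with a ServiceAccount).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Backoff shaped like the kubelet's volume retry above: first retry
	// after 1s, doubling on each attempt, for a handful of attempts.
	backoff := wait.Backoff{Duration: time.Second, Factor: 2.0, Steps: 6}

	err = wait.ExponentialBackoff(backoff, func() (bool, error) {
		_, getErr := clientset.CoreV1().Secrets("metallb-system").
			Get(context.TODO(), "metallb-memberlist", metav1.GetOptions{})
		if apierrors.IsNotFound(getErr) {
			// Same condition as `secret "metallb-memberlist" not found`:
			// not fatal, just try again after the backoff interval.
			fmt.Println("secret not found yet; retrying")
			return false, nil
		}
		// Done on success; abort on any other error.
		return getErr == nil, getErr
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("secret present; the volume mount would now succeed")
}

In the log this resolves on its own: the controller pod starts at 18:37:04 and the memberlist mount succeeds on the next reconciler attempt, so no operator action is needed unless the "not found" errors persist across many retries.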
Oct 02 18:37:16 crc kubenswrapper[4832]: I1002 18:37:16.695013 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dsxph" podStartSLOduration=5.227594263 podStartE2EDuration="13.694990804s" podCreationTimestamp="2025-10-02 18:37:03 +0000 UTC" firstStartedPulling="2025-10-02 18:37:03.714476661 +0000 UTC m=+980.683919533" lastFinishedPulling="2025-10-02 18:37:12.181873162 +0000 UTC m=+989.151316074" observedRunningTime="2025-10-02 18:37:16.683826734 +0000 UTC m=+993.653269606" watchObservedRunningTime="2025-10-02 18:37:16.694990804 +0000 UTC m=+993.664433676" Oct 02 18:37:18 crc kubenswrapper[4832]: I1002 18:37:18.399189 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:18 crc kubenswrapper[4832]: I1002 18:37:18.452945 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.663458 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rqvl5"] Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.665364 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.668570 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.668708 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.668811 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rmtgt" Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.676445 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rqvl5"] Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.715633 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rvs4\" (UniqueName: \"kubernetes.io/projected/791e5e7f-81c9-4e84-baa4-d1f1f752ed7b-kube-api-access-4rvs4\") pod \"openstack-operator-index-rqvl5\" (UID: \"791e5e7f-81c9-4e84-baa4-d1f1f752ed7b\") " pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.817514 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rvs4\" (UniqueName: \"kubernetes.io/projected/791e5e7f-81c9-4e84-baa4-d1f1f752ed7b-kube-api-access-4rvs4\") pod \"openstack-operator-index-rqvl5\" (UID: \"791e5e7f-81c9-4e84-baa4-d1f1f752ed7b\") " pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.845584 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rvs4\" (UniqueName: \"kubernetes.io/projected/791e5e7f-81c9-4e84-baa4-d1f1f752ed7b-kube-api-access-4rvs4\") pod \"openstack-operator-index-rqvl5\" (UID: \"791e5e7f-81c9-4e84-baa4-d1f1f752ed7b\") " pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:21 crc kubenswrapper[4832]: I1002 18:37:21.987884 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:22 crc kubenswrapper[4832]: I1002 18:37:22.473756 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rqvl5"] Oct 02 18:37:22 crc kubenswrapper[4832]: W1002 18:37:22.474701 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod791e5e7f_81c9_4e84_baa4_d1f1f752ed7b.slice/crio-fa0f849e3b3c40f74ea9b8dc167729c0a85ae2e029e1c52a68477ce843bd4a74 WatchSource:0}: Error finding container fa0f849e3b3c40f74ea9b8dc167729c0a85ae2e029e1c52a68477ce843bd4a74: Status 404 returned error can't find the container with id fa0f849e3b3c40f74ea9b8dc167729c0a85ae2e029e1c52a68477ce843bd4a74 Oct 02 18:37:22 crc kubenswrapper[4832]: I1002 18:37:22.671485 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rqvl5" event={"ID":"791e5e7f-81c9-4e84-baa4-d1f1f752ed7b","Type":"ContainerStarted","Data":"fa0f849e3b3c40f74ea9b8dc167729c0a85ae2e029e1c52a68477ce843bd4a74"} Oct 02 18:37:23 crc kubenswrapper[4832]: I1002 18:37:23.421184 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-6zhbp" Oct 02 18:37:25 crc kubenswrapper[4832]: I1002 18:37:25.697148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rqvl5" event={"ID":"791e5e7f-81c9-4e84-baa4-d1f1f752ed7b","Type":"ContainerStarted","Data":"150fc806b8c4516715b3e4ed3c9edff319b713504c8398ef1000b953f940bcbc"} Oct 02 18:37:25 crc kubenswrapper[4832]: I1002 18:37:25.714711 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rqvl5" podStartSLOduration=1.707278225 podStartE2EDuration="4.714687265s" podCreationTimestamp="2025-10-02 18:37:21 +0000 UTC" firstStartedPulling="2025-10-02 18:37:22.478346757 +0000 UTC m=+999.447789629" lastFinishedPulling="2025-10-02 18:37:25.485755797 +0000 UTC m=+1002.455198669" observedRunningTime="2025-10-02 18:37:25.71230246 +0000 UTC m=+1002.681745352" watchObservedRunningTime="2025-10-02 18:37:25.714687265 +0000 UTC m=+1002.684130147" Oct 02 18:37:26 crc kubenswrapper[4832]: I1002 18:37:26.875499 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:37:26 crc kubenswrapper[4832]: I1002 18:37:26.875731 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:37:31 crc kubenswrapper[4832]: I1002 18:37:31.988047 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:31 crc kubenswrapper[4832]: I1002 18:37:31.988754 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:32 crc kubenswrapper[4832]: I1002 18:37:32.036527 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:32 crc kubenswrapper[4832]: I1002 18:37:32.799870 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rqvl5" Oct 02 18:37:33 crc kubenswrapper[4832]: I1002 18:37:33.406147 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dsxph" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.129798 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77"] Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.132572 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.137107 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-s5dnr" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.152319 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77"] Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.260188 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tnkk\" (UniqueName: \"kubernetes.io/projected/cdf31730-cdc9-4eca-980c-2189c50917b2-kube-api-access-8tnkk\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.261469 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-util\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.261549 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-bundle\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.362751 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tnkk\" (UniqueName: \"kubernetes.io/projected/cdf31730-cdc9-4eca-980c-2189c50917b2-kube-api-access-8tnkk\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.362865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-util\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " 
pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.362902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-bundle\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.363557 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-bundle\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.363824 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-util\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.390627 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tnkk\" (UniqueName: \"kubernetes.io/projected/cdf31730-cdc9-4eca-980c-2189c50917b2-kube-api-access-8tnkk\") pod \"393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.462616 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:35 crc kubenswrapper[4832]: I1002 18:37:35.934529 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77"] Oct 02 18:37:35 crc kubenswrapper[4832]: W1002 18:37:35.941530 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdf31730_cdc9_4eca_980c_2189c50917b2.slice/crio-ba8161394fdefd0f2d9ff24c612c8534a4349263da4042628ce23a8617c18c66 WatchSource:0}: Error finding container ba8161394fdefd0f2d9ff24c612c8534a4349263da4042628ce23a8617c18c66: Status 404 returned error can't find the container with id ba8161394fdefd0f2d9ff24c612c8534a4349263da4042628ce23a8617c18c66 Oct 02 18:37:36 crc kubenswrapper[4832]: I1002 18:37:36.805971 4832 generic.go:334] "Generic (PLEG): container finished" podID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerID="def086ca0c086ec97dadddda9d0e2385173e687e8b5c9a5ede228028845b41c9" exitCode=0 Oct 02 18:37:36 crc kubenswrapper[4832]: I1002 18:37:36.806058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" event={"ID":"cdf31730-cdc9-4eca-980c-2189c50917b2","Type":"ContainerDied","Data":"def086ca0c086ec97dadddda9d0e2385173e687e8b5c9a5ede228028845b41c9"} Oct 02 18:37:36 crc kubenswrapper[4832]: I1002 18:37:36.806634 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" event={"ID":"cdf31730-cdc9-4eca-980c-2189c50917b2","Type":"ContainerStarted","Data":"ba8161394fdefd0f2d9ff24c612c8534a4349263da4042628ce23a8617c18c66"} Oct 02 18:37:37 crc kubenswrapper[4832]: I1002 18:37:37.818817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" event={"ID":"cdf31730-cdc9-4eca-980c-2189c50917b2","Type":"ContainerStarted","Data":"86b9184067eccdee82fc2063b3626771800fdf0819a19b218b8a0c74689a90ce"} Oct 02 18:37:38 crc kubenswrapper[4832]: I1002 18:37:38.832395 4832 generic.go:334] "Generic (PLEG): container finished" podID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerID="86b9184067eccdee82fc2063b3626771800fdf0819a19b218b8a0c74689a90ce" exitCode=0 Oct 02 18:37:38 crc kubenswrapper[4832]: I1002 18:37:38.832454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" event={"ID":"cdf31730-cdc9-4eca-980c-2189c50917b2","Type":"ContainerDied","Data":"86b9184067eccdee82fc2063b3626771800fdf0819a19b218b8a0c74689a90ce"} Oct 02 18:37:39 crc kubenswrapper[4832]: I1002 18:37:39.845636 4832 generic.go:334] "Generic (PLEG): container finished" podID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerID="a4300c0bdc3a0167a7a7a3329f8ce03cae562a6a4437b4ace800e37f35610af7" exitCode=0 Oct 02 18:37:39 crc kubenswrapper[4832]: I1002 18:37:39.845730 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" event={"ID":"cdf31730-cdc9-4eca-980c-2189c50917b2","Type":"ContainerDied","Data":"a4300c0bdc3a0167a7a7a3329f8ce03cae562a6a4437b4ace800e37f35610af7"} Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.279289 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.472933 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tnkk\" (UniqueName: \"kubernetes.io/projected/cdf31730-cdc9-4eca-980c-2189c50917b2-kube-api-access-8tnkk\") pod \"cdf31730-cdc9-4eca-980c-2189c50917b2\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.473371 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-util\") pod \"cdf31730-cdc9-4eca-980c-2189c50917b2\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.473432 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-bundle\") pod \"cdf31730-cdc9-4eca-980c-2189c50917b2\" (UID: \"cdf31730-cdc9-4eca-980c-2189c50917b2\") " Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.474236 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-bundle" (OuterVolumeSpecName: "bundle") pod "cdf31730-cdc9-4eca-980c-2189c50917b2" (UID: "cdf31730-cdc9-4eca-980c-2189c50917b2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.480701 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf31730-cdc9-4eca-980c-2189c50917b2-kube-api-access-8tnkk" (OuterVolumeSpecName: "kube-api-access-8tnkk") pod "cdf31730-cdc9-4eca-980c-2189c50917b2" (UID: "cdf31730-cdc9-4eca-980c-2189c50917b2"). InnerVolumeSpecName "kube-api-access-8tnkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.575289 4832 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.575333 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tnkk\" (UniqueName: \"kubernetes.io/projected/cdf31730-cdc9-4eca-980c-2189c50917b2-kube-api-access-8tnkk\") on node \"crc\" DevicePath \"\"" Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.610946 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-util" (OuterVolumeSpecName: "util") pod "cdf31730-cdc9-4eca-980c-2189c50917b2" (UID: "cdf31730-cdc9-4eca-980c-2189c50917b2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.675989 4832 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdf31730-cdc9-4eca-980c-2189c50917b2-util\") on node \"crc\" DevicePath \"\"" Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.866434 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" event={"ID":"cdf31730-cdc9-4eca-980c-2189c50917b2","Type":"ContainerDied","Data":"ba8161394fdefd0f2d9ff24c612c8534a4349263da4042628ce23a8617c18c66"} Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.866489 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba8161394fdefd0f2d9ff24c612c8534a4349263da4042628ce23a8617c18c66" Oct 02 18:37:41 crc kubenswrapper[4832]: I1002 18:37:41.866990 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77" Oct 02 18:37:56 crc kubenswrapper[4832]: I1002 18:37:56.875573 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:37:56 crc kubenswrapper[4832]: I1002 18:37:56.876107 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.359574 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp"] Oct 02 18:37:59 crc kubenswrapper[4832]: E1002 18:37:59.360450 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerName="extract" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.360467 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerName="extract" Oct 02 18:37:59 crc kubenswrapper[4832]: E1002 18:37:59.360490 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerName="pull" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.360496 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerName="pull" Oct 02 18:37:59 crc kubenswrapper[4832]: E1002 18:37:59.360518 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerName="util" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.360524 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerName="util" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.360666 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf31730-cdc9-4eca-980c-2189c50917b2" containerName="extract" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.361585 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.372543 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-xcb7z" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.392135 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp"] Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.502153 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppwtl\" (UniqueName: \"kubernetes.io/projected/37a13ba7-6567-4720-9a8d-ce1c3420bfb2-kube-api-access-ppwtl\") pod \"openstack-operator-controller-operator-7c58d4ffff-bz6pp\" (UID: \"37a13ba7-6567-4720-9a8d-ce1c3420bfb2\") " pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.604240 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppwtl\" (UniqueName: \"kubernetes.io/projected/37a13ba7-6567-4720-9a8d-ce1c3420bfb2-kube-api-access-ppwtl\") pod \"openstack-operator-controller-operator-7c58d4ffff-bz6pp\" (UID: \"37a13ba7-6567-4720-9a8d-ce1c3420bfb2\") " pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.628115 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppwtl\" (UniqueName: \"kubernetes.io/projected/37a13ba7-6567-4720-9a8d-ce1c3420bfb2-kube-api-access-ppwtl\") pod \"openstack-operator-controller-operator-7c58d4ffff-bz6pp\" (UID: \"37a13ba7-6567-4720-9a8d-ce1c3420bfb2\") " pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" Oct 02 18:37:59 crc kubenswrapper[4832]: I1002 18:37:59.682304 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" Oct 02 18:38:00 crc kubenswrapper[4832]: I1002 18:38:00.187053 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp"] Oct 02 18:38:01 crc kubenswrapper[4832]: I1002 18:38:01.037054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" event={"ID":"37a13ba7-6567-4720-9a8d-ce1c3420bfb2","Type":"ContainerStarted","Data":"537aa2a2b825ec32199fca97149a008125cf86164a1c13b9d36f918d4056fcef"} Oct 02 18:38:05 crc kubenswrapper[4832]: I1002 18:38:05.081146 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" event={"ID":"37a13ba7-6567-4720-9a8d-ce1c3420bfb2","Type":"ContainerStarted","Data":"0ce6927fd353de478091b624dd75e1f91d89662372cb627790344e9138602c38"} Oct 02 18:38:07 crc kubenswrapper[4832]: I1002 18:38:07.101206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" event={"ID":"37a13ba7-6567-4720-9a8d-ce1c3420bfb2","Type":"ContainerStarted","Data":"29128dc0a78e5fa7b39b2208e674c1e8086c3e8f8496ca4179fc8fa419cd7884"} Oct 02 18:38:07 crc kubenswrapper[4832]: I1002 18:38:07.101780 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" Oct 02 18:38:07 crc kubenswrapper[4832]: I1002 18:38:07.149533 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" podStartSLOduration=1.66244986 podStartE2EDuration="8.149517317s" podCreationTimestamp="2025-10-02 18:37:59 +0000 UTC" firstStartedPulling="2025-10-02 18:38:00.203530811 +0000 UTC m=+1037.172973683" lastFinishedPulling="2025-10-02 18:38:06.690598268 +0000 UTC m=+1043.660041140" observedRunningTime="2025-10-02 18:38:07.142515646 +0000 UTC m=+1044.111958518" watchObservedRunningTime="2025-10-02 18:38:07.149517317 +0000 UTC m=+1044.118960189" Oct 02 18:38:09 crc kubenswrapper[4832]: I1002 18:38:09.684905 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7c58d4ffff-bz6pp" Oct 02 18:38:26 crc kubenswrapper[4832]: I1002 18:38:26.875584 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:38:26 crc kubenswrapper[4832]: I1002 18:38:26.876219 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:38:26 crc kubenswrapper[4832]: I1002 18:38:26.876298 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:38:26 crc kubenswrapper[4832]: I1002 18:38:26.877049 4832 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65d3e2d93b30b70c1447cb55a9f5b7b0ff104d9e0a2d6e88b49ea2c6960bc4e2"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 18:38:26 crc kubenswrapper[4832]: I1002 18:38:26.877107 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://65d3e2d93b30b70c1447cb55a9f5b7b0ff104d9e0a2d6e88b49ea2c6960bc4e2" gracePeriod=600 Oct 02 18:38:29 crc kubenswrapper[4832]: I1002 18:38:29.299594 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="65d3e2d93b30b70c1447cb55a9f5b7b0ff104d9e0a2d6e88b49ea2c6960bc4e2" exitCode=0 Oct 02 18:38:29 crc kubenswrapper[4832]: I1002 18:38:29.299635 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"65d3e2d93b30b70c1447cb55a9f5b7b0ff104d9e0a2d6e88b49ea2c6960bc4e2"} Oct 02 18:38:29 crc kubenswrapper[4832]: I1002 18:38:29.300025 4832 scope.go:117] "RemoveContainer" containerID="39c61194f4de266798fee5bed294464e772af0f2983e7b44f1e151219ed48151" Oct 02 18:38:30 crc kubenswrapper[4832]: I1002 18:38:30.308665 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"c688f74c22f81ea3d61106cf0c7f62698937fd7d0fad6673fa18a7fd31c7b079"} Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.185154 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.187295 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.189133 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zdkmc" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.199895 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.201748 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.213080 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ss7t4" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.215399 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.216739 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.219494 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-kgksr" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.222832 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.229120 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.248481 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.255549 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.257158 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.259142 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-kv8w2" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.263969 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-599898f689-rnsmf"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.268638 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.292746 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-gthvd" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.304358 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.305920 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.310924 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2pb6s" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.312387 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfk4z\" (UniqueName: \"kubernetes.io/projected/7a36740f-eefd-4d9d-afe2-491d02a75fa6-kube-api-access-lfk4z\") pod \"heat-operator-controller-manager-599898f689-rnsmf\" (UID: \"7a36740f-eefd-4d9d-afe2-491d02a75fa6\") " pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.312494 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzh64\" (UniqueName: \"kubernetes.io/projected/3a910552-db07-45ab-9f11-5b5051a1d070-kube-api-access-pzh64\") pod \"cinder-operator-controller-manager-79d68d6c85-98l5q\" (UID: \"3a910552-db07-45ab-9f11-5b5051a1d070\") " pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.312557 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zx72\" (UniqueName: \"kubernetes.io/projected/2cca84e4-3eb8-41c8-95db-f5b755e83758-kube-api-access-2zx72\") pod \"barbican-operator-controller-manager-6c675fb79f-szf5c\" (UID: \"2cca84e4-3eb8-41c8-95db-f5b755e83758\") " pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.312613 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9sks\" (UniqueName: \"kubernetes.io/projected/0fa21051-6127-497b-a7dc-f4156314397e-kube-api-access-l9sks\") pod \"glance-operator-controller-manager-846dff85b5-9pnvm\" (UID: \"0fa21051-6127-497b-a7dc-f4156314397e\") " pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.312653 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4rw\" (UniqueName: \"kubernetes.io/projected/d06661d7-5a41-4954-bfe4-8d25a9aa49d1-kube-api-access-cv4rw\") pod \"designate-operator-controller-manager-75dfd9b554-sqc96\" (UID: \"d06661d7-5a41-4954-bfe4-8d25a9aa49d1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.377002 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.420971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfk4z\" (UniqueName: \"kubernetes.io/projected/7a36740f-eefd-4d9d-afe2-491d02a75fa6-kube-api-access-lfk4z\") pod \"heat-operator-controller-manager-599898f689-rnsmf\" (UID: \"7a36740f-eefd-4d9d-afe2-491d02a75fa6\") " pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.421048 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzh64\" (UniqueName: 
\"kubernetes.io/projected/3a910552-db07-45ab-9f11-5b5051a1d070-kube-api-access-pzh64\") pod \"cinder-operator-controller-manager-79d68d6c85-98l5q\" (UID: \"3a910552-db07-45ab-9f11-5b5051a1d070\") " pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.421093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zx72\" (UniqueName: \"kubernetes.io/projected/2cca84e4-3eb8-41c8-95db-f5b755e83758-kube-api-access-2zx72\") pod \"barbican-operator-controller-manager-6c675fb79f-szf5c\" (UID: \"2cca84e4-3eb8-41c8-95db-f5b755e83758\") " pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.421132 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9sks\" (UniqueName: \"kubernetes.io/projected/0fa21051-6127-497b-a7dc-f4156314397e-kube-api-access-l9sks\") pod \"glance-operator-controller-manager-846dff85b5-9pnvm\" (UID: \"0fa21051-6127-497b-a7dc-f4156314397e\") " pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.421163 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4rw\" (UniqueName: \"kubernetes.io/projected/d06661d7-5a41-4954-bfe4-8d25a9aa49d1-kube-api-access-cv4rw\") pod \"designate-operator-controller-manager-75dfd9b554-sqc96\" (UID: \"d06661d7-5a41-4954-bfe4-8d25a9aa49d1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.421201 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngfr\" (UniqueName: \"kubernetes.io/projected/b66dc2a1-b115-4952-99ce-866046ca9ea5-kube-api-access-lngfr\") pod \"horizon-operator-controller-manager-6769b867d9-zsnms\" (UID: \"b66dc2a1-b115-4952-99ce-866046ca9ea5\") " pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.448211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfk4z\" (UniqueName: \"kubernetes.io/projected/7a36740f-eefd-4d9d-afe2-491d02a75fa6-kube-api-access-lfk4z\") pod \"heat-operator-controller-manager-599898f689-rnsmf\" (UID: \"7a36740f-eefd-4d9d-afe2-491d02a75fa6\") " pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.454167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9sks\" (UniqueName: \"kubernetes.io/projected/0fa21051-6127-497b-a7dc-f4156314397e-kube-api-access-l9sks\") pod \"glance-operator-controller-manager-846dff85b5-9pnvm\" (UID: \"0fa21051-6127-497b-a7dc-f4156314397e\") " pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.454300 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-599898f689-rnsmf"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.459789 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zx72\" (UniqueName: \"kubernetes.io/projected/2cca84e4-3eb8-41c8-95db-f5b755e83758-kube-api-access-2zx72\") pod 
\"barbican-operator-controller-manager-6c675fb79f-szf5c\" (UID: \"2cca84e4-3eb8-41c8-95db-f5b755e83758\") " pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.515496 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4rw\" (UniqueName: \"kubernetes.io/projected/d06661d7-5a41-4954-bfe4-8d25a9aa49d1-kube-api-access-cv4rw\") pod \"designate-operator-controller-manager-75dfd9b554-sqc96\" (UID: \"d06661d7-5a41-4954-bfe4-8d25a9aa49d1\") " pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.516842 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzh64\" (UniqueName: \"kubernetes.io/projected/3a910552-db07-45ab-9f11-5b5051a1d070-kube-api-access-pzh64\") pod \"cinder-operator-controller-manager-79d68d6c85-98l5q\" (UID: \"3a910552-db07-45ab-9f11-5b5051a1d070\") " pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.516923 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.520395 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.522210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lngfr\" (UniqueName: \"kubernetes.io/projected/b66dc2a1-b115-4952-99ce-866046ca9ea5-kube-api-access-lngfr\") pod \"horizon-operator-controller-manager-6769b867d9-zsnms\" (UID: \"b66dc2a1-b115-4952-99ce-866046ca9ea5\") " pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.522370 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.535426 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.544369 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.545149 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.546550 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.547110 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.572775 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-f6nfn" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.573017 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.573213 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7xczt" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.583913 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lngfr\" (UniqueName: \"kubernetes.io/projected/b66dc2a1-b115-4952-99ce-866046ca9ea5-kube-api-access-lngfr\") pod \"horizon-operator-controller-manager-6769b867d9-zsnms\" (UID: \"b66dc2a1-b115-4952-99ce-866046ca9ea5\") " pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.590736 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.600787 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.613687 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.617786 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.625709 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vgk6\" (UniqueName: \"kubernetes.io/projected/93650652-02f0-403d-a9e6-6a71feb797c6-kube-api-access-6vgk6\") pod \"ironic-operator-controller-manager-84bc9db6cc-xf6p9\" (UID: \"93650652-02f0-403d-a9e6-6a71feb797c6\") " pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.625837 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0835997a-eef2-4744-a6ed-dce8714f62f7-cert\") pod \"infra-operator-controller-manager-5fbf469cd7-262c9\" (UID: \"0835997a-eef2-4744-a6ed-dce8714f62f7\") " pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.625857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd422\" (UniqueName: \"kubernetes.io/projected/0835997a-eef2-4744-a6ed-dce8714f62f7-kube-api-access-fd422\") pod \"infra-operator-controller-manager-5fbf469cd7-262c9\" (UID: \"0835997a-eef2-4744-a6ed-dce8714f62f7\") " pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.649178 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.652539 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.665217 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.667931 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jbxf2" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.727247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0835997a-eef2-4744-a6ed-dce8714f62f7-cert\") pod \"infra-operator-controller-manager-5fbf469cd7-262c9\" (UID: \"0835997a-eef2-4744-a6ed-dce8714f62f7\") " pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:38 crc kubenswrapper[4832]: E1002 18:38:38.727960 4832 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.728408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd422\" (UniqueName: \"kubernetes.io/projected/0835997a-eef2-4744-a6ed-dce8714f62f7-kube-api-access-fd422\") pod \"infra-operator-controller-manager-5fbf469cd7-262c9\" (UID: \"0835997a-eef2-4744-a6ed-dce8714f62f7\") " pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:38 crc kubenswrapper[4832]: E1002 18:38:38.728424 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0835997a-eef2-4744-a6ed-dce8714f62f7-cert podName:0835997a-eef2-4744-a6ed-dce8714f62f7 nodeName:}" failed. No retries permitted until 2025-10-02 18:38:39.228402972 +0000 UTC m=+1076.197845924 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0835997a-eef2-4744-a6ed-dce8714f62f7-cert") pod "infra-operator-controller-manager-5fbf469cd7-262c9" (UID: "0835997a-eef2-4744-a6ed-dce8714f62f7") : secret "infra-operator-webhook-server-cert" not found Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.733509 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vgk6\" (UniqueName: \"kubernetes.io/projected/93650652-02f0-403d-a9e6-6a71feb797c6-kube-api-access-6vgk6\") pod \"ironic-operator-controller-manager-84bc9db6cc-xf6p9\" (UID: \"93650652-02f0-403d-a9e6-6a71feb797c6\") " pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.751243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd422\" (UniqueName: \"kubernetes.io/projected/0835997a-eef2-4744-a6ed-dce8714f62f7-kube-api-access-fd422\") pod \"infra-operator-controller-manager-5fbf469cd7-262c9\" (UID: \"0835997a-eef2-4744-a6ed-dce8714f62f7\") " pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.757222 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.758394 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vgk6\" (UniqueName: \"kubernetes.io/projected/93650652-02f0-403d-a9e6-6a71feb797c6-kube-api-access-6vgk6\") pod \"ironic-operator-controller-manager-84bc9db6cc-xf6p9\" (UID: \"93650652-02f0-403d-a9e6-6a71feb797c6\") " pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.761706 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.775334 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gt7xv" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.818888 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.825899 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.833637 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.834996 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.836416 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfb95\" (UniqueName: \"kubernetes.io/projected/b2e208d4-436d-4e17-b4b3-165b130164c7-kube-api-access-bfb95\") pod \"keystone-operator-controller-manager-7f55849f88-fv5h5\" (UID: \"b2e208d4-436d-4e17-b4b3-165b130164c7\") " pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.837773 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-76chc" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.849516 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.893894 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.914797 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.922703 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-dq2mt" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.938350 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pcd2\" (UniqueName: \"kubernetes.io/projected/5ae05766-702f-4f1d-a149-a01663fd2b53-kube-api-access-4pcd2\") pod \"manila-operator-controller-manager-6fd6854b49-ts5sn\" (UID: \"5ae05766-702f-4f1d-a149-a01663fd2b53\") " pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.938475 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfb95\" (UniqueName: \"kubernetes.io/projected/b2e208d4-436d-4e17-b4b3-165b130164c7-kube-api-access-bfb95\") pod \"keystone-operator-controller-manager-7f55849f88-fv5h5\" (UID: \"b2e208d4-436d-4e17-b4b3-165b130164c7\") " pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.938531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bcx\" (UniqueName: \"kubernetes.io/projected/6049f0ba-16e6-4773-bc16-d26b6e04364e-kube-api-access-z8bcx\") pod \"mariadb-operator-controller-manager-5c468bf4d4-nb6x7\" (UID: \"6049f0ba-16e6-4773-bc16-d26b6e04364e\") " pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.950351 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6"] Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.976091 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" Oct 02 18:38:38 crc kubenswrapper[4832]: I1002 18:38:38.991071 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfb95\" (UniqueName: \"kubernetes.io/projected/b2e208d4-436d-4e17-b4b3-165b130164c7-kube-api-access-bfb95\") pod \"keystone-operator-controller-manager-7f55849f88-fv5h5\" (UID: \"b2e208d4-436d-4e17-b4b3-165b130164c7\") " pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.000029 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.001491 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.008620 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xtrrd" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.023277 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.026028 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.026880 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.039092 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ccqg2" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.039937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42ggh\" (UniqueName: \"kubernetes.io/projected/60b0fee3-0856-4087-ad87-0a4847e3613c-kube-api-access-42ggh\") pod \"nova-operator-controller-manager-555c7456bd-v8gc6\" (UID: \"60b0fee3-0856-4087-ad87-0a4847e3613c\") " pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.039986 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8bcx\" (UniqueName: \"kubernetes.io/projected/6049f0ba-16e6-4773-bc16-d26b6e04364e-kube-api-access-z8bcx\") pod \"mariadb-operator-controller-manager-5c468bf4d4-nb6x7\" (UID: \"6049f0ba-16e6-4773-bc16-d26b6e04364e\") " pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.040064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pcd2\" (UniqueName: \"kubernetes.io/projected/5ae05766-702f-4f1d-a149-a01663fd2b53-kube-api-access-4pcd2\") pod \"manila-operator-controller-manager-6fd6854b49-ts5sn\" (UID: \"5ae05766-702f-4f1d-a149-a01663fd2b53\") " pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.047971 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.067867 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pcd2\" (UniqueName: \"kubernetes.io/projected/5ae05766-702f-4f1d-a149-a01663fd2b53-kube-api-access-4pcd2\") pod \"manila-operator-controller-manager-6fd6854b49-ts5sn\" (UID: \"5ae05766-702f-4f1d-a149-a01663fd2b53\") " pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.082140 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.086813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8bcx\" (UniqueName: \"kubernetes.io/projected/6049f0ba-16e6-4773-bc16-d26b6e04364e-kube-api-access-z8bcx\") pod \"mariadb-operator-controller-manager-5c468bf4d4-nb6x7\" (UID: \"6049f0ba-16e6-4773-bc16-d26b6e04364e\") " pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.093591 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.095050 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.100054 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.101378 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.104097 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-r7848" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.104286 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.104980 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-w8glv" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.108310 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.110033 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.112674 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-sd6wj" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.116176 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.120628 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.142347 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgnn\" (UniqueName: \"kubernetes.io/projected/36d178a5-f367-4534-ab1e-54c162ce2961-kube-api-access-rfgnn\") pod \"neutron-operator-controller-manager-6574bf987d-79nms\" (UID: \"36d178a5-f367-4534-ab1e-54c162ce2961\") " pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.142452 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswsh\" (UniqueName: \"kubernetes.io/projected/655a4d07-4b1f-420e-b676-8e5094960f64-kube-api-access-vswsh\") pod \"octavia-operator-controller-manager-59d6cfdf45-tsswz\" (UID: \"655a4d07-4b1f-420e-b676-8e5094960f64\") " pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.142548 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42ggh\" (UniqueName: \"kubernetes.io/projected/60b0fee3-0856-4087-ad87-0a4847e3613c-kube-api-access-42ggh\") pod \"nova-operator-controller-manager-555c7456bd-v8gc6\" (UID: \"60b0fee3-0856-4087-ad87-0a4847e3613c\") " pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.182104 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-97td6"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.186627 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.190844 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.193159 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.194458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42ggh\" (UniqueName: \"kubernetes.io/projected/60b0fee3-0856-4087-ad87-0a4847e3613c-kube-api-access-42ggh\") pod \"nova-operator-controller-manager-555c7456bd-v8gc6\" (UID: \"60b0fee3-0856-4087-ad87-0a4847e3613c\") " pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.198371 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mpj6r" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.218115 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.244360 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2ltv\" (UniqueName: \"kubernetes.io/projected/30502d18-201c-4133-b25a-7b1e96ce21cf-kube-api-access-j2ltv\") pod \"openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr\" (UID: \"30502d18-201c-4133-b25a-7b1e96ce21cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.244418 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5hh6\" (UniqueName: \"kubernetes.io/projected/e8179b13-12b7-492d-bc86-f5543cfcbfbb-kube-api-access-m5hh6\") pod \"placement-operator-controller-manager-7d8bb7f44c-vqrd9\" (UID: \"e8179b13-12b7-492d-bc86-f5543cfcbfbb\") " pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.244508 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30502d18-201c-4133-b25a-7b1e96ce21cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr\" (UID: \"30502d18-201c-4133-b25a-7b1e96ce21cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.244547 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vswsh\" (UniqueName: \"kubernetes.io/projected/655a4d07-4b1f-420e-b676-8e5094960f64-kube-api-access-vswsh\") pod \"octavia-operator-controller-manager-59d6cfdf45-tsswz\" (UID: \"655a4d07-4b1f-420e-b676-8e5094960f64\") " pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.244592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0835997a-eef2-4744-a6ed-dce8714f62f7-cert\") pod \"infra-operator-controller-manager-5fbf469cd7-262c9\" (UID: \"0835997a-eef2-4744-a6ed-dce8714f62f7\") " pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.244696 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-999lh\" (UniqueName: 
\"kubernetes.io/projected/8b694594-41bd-4e62-a202-951f85430ff6-kube-api-access-999lh\") pod \"ovn-operator-controller-manager-688db7b6c7-p9wql\" (UID: \"8b694594-41bd-4e62-a202-951f85430ff6\") " pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.244724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgnn\" (UniqueName: \"kubernetes.io/projected/36d178a5-f367-4534-ab1e-54c162ce2961-kube-api-access-rfgnn\") pod \"neutron-operator-controller-manager-6574bf987d-79nms\" (UID: \"36d178a5-f367-4534-ab1e-54c162ce2961\") " pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.251814 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.266123 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0835997a-eef2-4744-a6ed-dce8714f62f7-cert\") pod \"infra-operator-controller-manager-5fbf469cd7-262c9\" (UID: \"0835997a-eef2-4744-a6ed-dce8714f62f7\") " pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.274697 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswsh\" (UniqueName: \"kubernetes.io/projected/655a4d07-4b1f-420e-b676-8e5094960f64-kube-api-access-vswsh\") pod \"octavia-operator-controller-manager-59d6cfdf45-tsswz\" (UID: \"655a4d07-4b1f-420e-b676-8e5094960f64\") " pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.280699 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-97td6"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.280839 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.281955 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.289089 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-d7dql" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.291781 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.292433 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.297806 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgnn\" (UniqueName: \"kubernetes.io/projected/36d178a5-f367-4534-ab1e-54c162ce2961-kube-api-access-rfgnn\") pod \"neutron-operator-controller-manager-6574bf987d-79nms\" (UID: \"36d178a5-f367-4534-ab1e-54c162ce2961\") " pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.312477 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.322782 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hzrfr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.346509 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xnfj\" (UniqueName: \"kubernetes.io/projected/2ac2d023-64bc-4653-a8eb-2dd5ed49313c-kube-api-access-2xnfj\") pod \"swift-operator-controller-manager-6859f9b676-97td6\" (UID: \"2ac2d023-64bc-4653-a8eb-2dd5ed49313c\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.346588 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-999lh\" (UniqueName: \"kubernetes.io/projected/8b694594-41bd-4e62-a202-951f85430ff6-kube-api-access-999lh\") pod \"ovn-operator-controller-manager-688db7b6c7-p9wql\" (UID: \"8b694594-41bd-4e62-a202-951f85430ff6\") " pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.346626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2ltv\" (UniqueName: \"kubernetes.io/projected/30502d18-201c-4133-b25a-7b1e96ce21cf-kube-api-access-j2ltv\") pod \"openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr\" (UID: \"30502d18-201c-4133-b25a-7b1e96ce21cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.346647 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5hh6\" (UniqueName: \"kubernetes.io/projected/e8179b13-12b7-492d-bc86-f5543cfcbfbb-kube-api-access-m5hh6\") pod \"placement-operator-controller-manager-7d8bb7f44c-vqrd9\" (UID: \"e8179b13-12b7-492d-bc86-f5543cfcbfbb\") " pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.346689 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccpg4\" (UniqueName: \"kubernetes.io/projected/e6f36bc2-bb15-47f7-9881-05f35c2c513c-kube-api-access-ccpg4\") pod \"telemetry-operator-controller-manager-769bf6645d-wj4tb\" (UID: \"e6f36bc2-bb15-47f7-9881-05f35c2c513c\") " pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" Oct 02 18:38:39 crc 
kubenswrapper[4832]: I1002 18:38:39.346717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30502d18-201c-4133-b25a-7b1e96ce21cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr\" (UID: \"30502d18-201c-4133-b25a-7b1e96ce21cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.363094 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30502d18-201c-4133-b25a-7b1e96ce21cf-cert\") pod \"openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr\" (UID: \"30502d18-201c-4133-b25a-7b1e96ce21cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.365509 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.365742 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.367349 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5hh6\" (UniqueName: \"kubernetes.io/projected/e8179b13-12b7-492d-bc86-f5543cfcbfbb-kube-api-access-m5hh6\") pod \"placement-operator-controller-manager-7d8bb7f44c-vqrd9\" (UID: \"e8179b13-12b7-492d-bc86-f5543cfcbfbb\") " pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.369513 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-999lh\" (UniqueName: \"kubernetes.io/projected/8b694594-41bd-4e62-a202-951f85430ff6-kube-api-access-999lh\") pod \"ovn-operator-controller-manager-688db7b6c7-p9wql\" (UID: \"8b694594-41bd-4e62-a202-951f85430ff6\") " pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.378423 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2ltv\" (UniqueName: \"kubernetes.io/projected/30502d18-201c-4133-b25a-7b1e96ce21cf-kube-api-access-j2ltv\") pod \"openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr\" (UID: \"30502d18-201c-4133-b25a-7b1e96ce21cf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.390341 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.392041 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.397901 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6pmjk" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.400463 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.404571 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.409750 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" event={"ID":"d06661d7-5a41-4954-bfe4-8d25a9aa49d1","Type":"ContainerStarted","Data":"6c859c132e17751c2d84d8019877885ace9bb9be37b01ef93eeef030b976ea6e"} Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.416540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" event={"ID":"2cca84e4-3eb8-41c8-95db-f5b755e83758","Type":"ContainerStarted","Data":"060f306c6e0bc17b1906c588567b912ff69ac5322335d6cff00bac15a8fe1eaa"} Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.428980 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.437997 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.439836 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.443417 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.443925 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-98lmr" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.452783 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnkss\" (UniqueName: \"kubernetes.io/projected/13334024-dee1-47bd-aebe-22df02b93ea0-kube-api-access-mnkss\") pod \"test-operator-controller-manager-5cd5cb47d7-frks4\" (UID: \"13334024-dee1-47bd-aebe-22df02b93ea0\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.452955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccpg4\" (UniqueName: \"kubernetes.io/projected/e6f36bc2-bb15-47f7-9881-05f35c2c513c-kube-api-access-ccpg4\") pod \"telemetry-operator-controller-manager-769bf6645d-wj4tb\" (UID: \"e6f36bc2-bb15-47f7-9881-05f35c2c513c\") " pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.453160 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xnfj\" (UniqueName: \"kubernetes.io/projected/2ac2d023-64bc-4653-a8eb-2dd5ed49313c-kube-api-access-2xnfj\") pod \"swift-operator-controller-manager-6859f9b676-97td6\" (UID: \"2ac2d023-64bc-4653-a8eb-2dd5ed49313c\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.465414 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.487091 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.488588 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.481467 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.501396 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xnfj\" (UniqueName: \"kubernetes.io/projected/2ac2d023-64bc-4653-a8eb-2dd5ed49313c-kube-api-access-2xnfj\") pod \"swift-operator-controller-manager-6859f9b676-97td6\" (UID: \"2ac2d023-64bc-4653-a8eb-2dd5ed49313c\") " pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.501758 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.503737 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-9vphz" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.511804 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.517448 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccpg4\" (UniqueName: \"kubernetes.io/projected/e6f36bc2-bb15-47f7-9881-05f35c2c513c-kube-api-access-ccpg4\") pod \"telemetry-operator-controller-manager-769bf6645d-wj4tb\" (UID: \"e6f36bc2-bb15-47f7-9881-05f35c2c513c\") " pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.539949 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.540770 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.554338 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnkss\" (UniqueName: \"kubernetes.io/projected/13334024-dee1-47bd-aebe-22df02b93ea0-kube-api-access-mnkss\") pod \"test-operator-controller-manager-5cd5cb47d7-frks4\" (UID: \"13334024-dee1-47bd-aebe-22df02b93ea0\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.554403 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2kfc\" (UniqueName: \"kubernetes.io/projected/47968f19-fabf-423c-9cf6-1d8b57654e3f-kube-api-access-w2kfc\") pod \"openstack-operator-controller-manager-7bffff79d9-sj2rb\" (UID: \"47968f19-fabf-423c-9cf6-1d8b57654e3f\") " pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.554444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47968f19-fabf-423c-9cf6-1d8b57654e3f-cert\") pod \"openstack-operator-controller-manager-7bffff79d9-sj2rb\" (UID: \"47968f19-fabf-423c-9cf6-1d8b57654e3f\") " pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.554500 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gwp2\" (UniqueName: \"kubernetes.io/projected/45bbf7cb-04fb-4076-af85-0cecd610a929-kube-api-access-5gwp2\") pod \"watcher-operator-controller-manager-fcd7d9895-plmxx\" (UID: \"45bbf7cb-04fb-4076-af85-0cecd610a929\") " pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.554540 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47r7w\" (UniqueName: \"kubernetes.io/projected/6ad88169-8b29-4078-90a5-759d2cb18325-kube-api-access-47r7w\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-59mvq\" (UID: \"6ad88169-8b29-4078-90a5-759d2cb18325\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.590848 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnkss\" (UniqueName: \"kubernetes.io/projected/13334024-dee1-47bd-aebe-22df02b93ea0-kube-api-access-mnkss\") pod \"test-operator-controller-manager-5cd5cb47d7-frks4\" (UID: \"13334024-dee1-47bd-aebe-22df02b93ea0\") " pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.613775 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.628379 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.636314 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.640857 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.667905 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47r7w\" (UniqueName: \"kubernetes.io/projected/6ad88169-8b29-4078-90a5-759d2cb18325-kube-api-access-47r7w\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-59mvq\" (UID: \"6ad88169-8b29-4078-90a5-759d2cb18325\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.668455 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2kfc\" (UniqueName: \"kubernetes.io/projected/47968f19-fabf-423c-9cf6-1d8b57654e3f-kube-api-access-w2kfc\") pod \"openstack-operator-controller-manager-7bffff79d9-sj2rb\" (UID: \"47968f19-fabf-423c-9cf6-1d8b57654e3f\") " pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.668557 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47968f19-fabf-423c-9cf6-1d8b57654e3f-cert\") pod \"openstack-operator-controller-manager-7bffff79d9-sj2rb\" (UID: \"47968f19-fabf-423c-9cf6-1d8b57654e3f\") " pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.668661 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gwp2\" (UniqueName: \"kubernetes.io/projected/45bbf7cb-04fb-4076-af85-0cecd610a929-kube-api-access-5gwp2\") pod \"watcher-operator-controller-manager-fcd7d9895-plmxx\" (UID: \"45bbf7cb-04fb-4076-af85-0cecd610a929\") " pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" Oct 02 18:38:39 crc kubenswrapper[4832]: E1002 18:38:39.668812 4832 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 02 18:38:39 crc kubenswrapper[4832]: E1002 18:38:39.668871 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47968f19-fabf-423c-9cf6-1d8b57654e3f-cert podName:47968f19-fabf-423c-9cf6-1d8b57654e3f nodeName:}" failed. No retries permitted until 2025-10-02 18:38:40.168854748 +0000 UTC m=+1077.138297620 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47968f19-fabf-423c-9cf6-1d8b57654e3f-cert") pod "openstack-operator-controller-manager-7bffff79d9-sj2rb" (UID: "47968f19-fabf-423c-9cf6-1d8b57654e3f") : secret "webhook-server-cert" not found Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.720559 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47r7w\" (UniqueName: \"kubernetes.io/projected/6ad88169-8b29-4078-90a5-759d2cb18325-kube-api-access-47r7w\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-59mvq\" (UID: \"6ad88169-8b29-4078-90a5-759d2cb18325\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.732153 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gwp2\" (UniqueName: \"kubernetes.io/projected/45bbf7cb-04fb-4076-af85-0cecd610a929-kube-api-access-5gwp2\") pod \"watcher-operator-controller-manager-fcd7d9895-plmxx\" (UID: \"45bbf7cb-04fb-4076-af85-0cecd610a929\") " pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.732459 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2kfc\" (UniqueName: \"kubernetes.io/projected/47968f19-fabf-423c-9cf6-1d8b57654e3f-kube-api-access-w2kfc\") pod \"openstack-operator-controller-manager-7bffff79d9-sj2rb\" (UID: \"47968f19-fabf-423c-9cf6-1d8b57654e3f\") " pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.796150 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.796587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.801966 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.809504 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-599898f689-rnsmf"] Oct 02 18:38:39 crc kubenswrapper[4832]: I1002 18:38:39.817355 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms"] Oct 02 18:38:39 crc kubenswrapper[4832]: W1002 18:38:39.838226 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a36740f_eefd_4d9d_afe2_491d02a75fa6.slice/crio-35bad0a101baecdd3f91c54afa258cbc74111fe4b9c9ea101850755443c61e8e WatchSource:0}: Error finding container 35bad0a101baecdd3f91c54afa258cbc74111fe4b9c9ea101850755443c61e8e: Status 404 returned error can't find the container with id 35bad0a101baecdd3f91c54afa258cbc74111fe4b9c9ea101850755443c61e8e Oct 02 18:38:39 crc kubenswrapper[4832]: W1002 18:38:39.840763 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb66dc2a1_b115_4952_99ce_866046ca9ea5.slice/crio-1ab595aa17d4cc4df40aba275c4eaf017bdb25b710b7c2a52e6ac7d6198363e5 WatchSource:0}: Error finding container 
1ab595aa17d4cc4df40aba275c4eaf017bdb25b710b7c2a52e6ac7d6198363e5: Status 404 returned error can't find the container with id 1ab595aa17d4cc4df40aba275c4eaf017bdb25b710b7c2a52e6ac7d6198363e5 Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.016218 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.181733 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47968f19-fabf-423c-9cf6-1d8b57654e3f-cert\") pod \"openstack-operator-controller-manager-7bffff79d9-sj2rb\" (UID: \"47968f19-fabf-423c-9cf6-1d8b57654e3f\") " pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.193011 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47968f19-fabf-423c-9cf6-1d8b57654e3f-cert\") pod \"openstack-operator-controller-manager-7bffff79d9-sj2rb\" (UID: \"47968f19-fabf-423c-9cf6-1d8b57654e3f\") " pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.236550 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9"] Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.255431 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7"] Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.291349 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn"] Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.312627 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5"] Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.364192 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms"] Oct 02 18:38:40 crc kubenswrapper[4832]: W1002 18:38:40.365625 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ae05766_702f_4f1d_a149_a01663fd2b53.slice/crio-4eb366fdb55fd38fd10ea720e60d0573db26ed315b9bb79cf5c31d1430755359 WatchSource:0}: Error finding container 4eb366fdb55fd38fd10ea720e60d0573db26ed315b9bb79cf5c31d1430755359: Status 404 returned error can't find the container with id 4eb366fdb55fd38fd10ea720e60d0573db26ed315b9bb79cf5c31d1430755359 Oct 02 18:38:40 crc kubenswrapper[4832]: W1002 18:38:40.368087 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36d178a5_f367_4534_ab1e_54c162ce2961.slice/crio-314b8988314ffa90f1e5cef3e90a9adc041bbf5b3eccb12df66e17e315815a8c WatchSource:0}: Error finding container 314b8988314ffa90f1e5cef3e90a9adc041bbf5b3eccb12df66e17e315815a8c: Status 404 returned error can't find the container with id 314b8988314ffa90f1e5cef3e90a9adc041bbf5b3eccb12df66e17e315815a8c Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.384553 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.425939 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" event={"ID":"b66dc2a1-b115-4952-99ce-866046ca9ea5","Type":"ContainerStarted","Data":"1ab595aa17d4cc4df40aba275c4eaf017bdb25b710b7c2a52e6ac7d6198363e5"} Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.426883 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" event={"ID":"b2e208d4-436d-4e17-b4b3-165b130164c7","Type":"ContainerStarted","Data":"8115472c63603fbe0cdf11ba4995061d8e53aa4306fe54869b75ad9966671b05"} Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.429369 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" event={"ID":"0fa21051-6127-497b-a7dc-f4156314397e","Type":"ContainerStarted","Data":"cb32fa0fd19eda90217b71b4132a000bfd773851e79f3c145a6a1f372db902a5"} Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.430461 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" event={"ID":"7a36740f-eefd-4d9d-afe2-491d02a75fa6","Type":"ContainerStarted","Data":"35bad0a101baecdd3f91c54afa258cbc74111fe4b9c9ea101850755443c61e8e"} Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.431177 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" event={"ID":"6049f0ba-16e6-4773-bc16-d26b6e04364e","Type":"ContainerStarted","Data":"6811a21c4889bd14c826c8abfb7c8e0ca50c6a29f83387c8bf7a83c2154d1876"} Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.433863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" event={"ID":"93650652-02f0-403d-a9e6-6a71feb797c6","Type":"ContainerStarted","Data":"295ab5da29456a292dded8234f20fd10b55e156bd03b043c4ca9ef197a3ed05a"} Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.438183 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" event={"ID":"36d178a5-f367-4534-ab1e-54c162ce2961","Type":"ContainerStarted","Data":"314b8988314ffa90f1e5cef3e90a9adc041bbf5b3eccb12df66e17e315815a8c"} Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.440057 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" event={"ID":"3a910552-db07-45ab-9f11-5b5051a1d070","Type":"ContainerStarted","Data":"00bedc539f74617444f73df744d2fd205c263ed3fcfb4e4ad03b7b4574c8efac"} Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.447758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" event={"ID":"5ae05766-702f-4f1d-a149-a01663fd2b53","Type":"ContainerStarted","Data":"4eb366fdb55fd38fd10ea720e60d0573db26ed315b9bb79cf5c31d1430755359"} Oct 02 18:38:40 crc kubenswrapper[4832]: W1002 18:38:40.932394 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8179b13_12b7_492d_bc86_f5543cfcbfbb.slice/crio-b01da33aa50ee205891a8cfcbaf04f5b09017662d1cf7168b30e61732c0a195d 
Oct 02 18:38:40 crc kubenswrapper[4832]: W1002 18:38:40.938378 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod655a4d07_4b1f_420e_b676_8e5094960f64.slice/crio-3e23e6914115d6a5b20fbe3673c28af6594942a4caf76bef5103fe9b9c098d45 WatchSource:0}: Error finding container 3e23e6914115d6a5b20fbe3673c28af6594942a4caf76bef5103fe9b9c098d45: Status 404 returned error can't find the container with id 3e23e6914115d6a5b20fbe3673c28af6594942a4caf76bef5103fe9b9c098d45
Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.948314 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9"]
Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.960202 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6"]
Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.969034 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb"]
Oct 02 18:38:40 crc kubenswrapper[4832]: W1002 18:38:40.972370 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b694594_41bd_4e62_a202_951f85430ff6.slice/crio-e52c78c3437d4f481f03c5d054e12b30199aa9a56c7e081c2e1ab290102f3e41 WatchSource:0}: Error finding container e52c78c3437d4f481f03c5d054e12b30199aa9a56c7e081c2e1ab290102f3e41: Status 404 returned error can't find the container with id e52c78c3437d4f481f03c5d054e12b30199aa9a56c7e081c2e1ab290102f3e41
Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.975681 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz"]
Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.990133 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql"]
Oct 02 18:38:40 crc kubenswrapper[4832]: I1002 18:38:40.999513 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4"]
Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.009038 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6859f9b676-97td6"]
Oct 02 18:38:41 crc kubenswrapper[4832]: W1002 18:38:41.010639 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13334024_dee1_47bd_aebe_22df02b93ea0.slice/crio-d489d0e2a4bea094cb1c965ad95c1c118d1140f99134aab72ac2024ce0f56727 WatchSource:0}: Error finding container d489d0e2a4bea094cb1c965ad95c1c118d1140f99134aab72ac2024ce0f56727: Status 404 returned error can't find the container with id d489d0e2a4bea094cb1c965ad95c1c118d1140f99134aab72ac2024ce0f56727
Oct 02 18:38:41 crc kubenswrapper[4832]: W1002 18:38:41.014545 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ad88169_8b29_4078_90a5_759d2cb18325.slice/crio-e4fb144c9727156da03585f005549668a1e806cc5baa25e58b2845d5295d4ed0 WatchSource:0}: Error finding container e4fb144c9727156da03585f005549668a1e806cc5baa25e58b2845d5295d4ed0: Status 404 returned error can't find the container with id e4fb144c9727156da03585f005549668a1e806cc5baa25e58b2845d5295d4ed0
Oct 02 18:38:41 crc kubenswrapper[4832]: W1002 18:38:41.014971 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30502d18_201c_4133_b25a_7b1e96ce21cf.slice/crio-1a8e96fc2a5c3d5c07f0f8f1500546f47ecbac630fed8b0be63b8de682a54d3a WatchSource:0}: Error finding container 1a8e96fc2a5c3d5c07f0f8f1500546f47ecbac630fed8b0be63b8de682a54d3a: Status 404 returned error can't find the container with id 1a8e96fc2a5c3d5c07f0f8f1500546f47ecbac630fed8b0be63b8de682a54d3a
Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.017505 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr"]
Oct 02 18:38:41 crc kubenswrapper[4832]: E1002 18:38:41.022608 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnkss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd5cb47d7-frks4_openstack-operators(13334024-dee1-47bd-aebe-22df02b93ea0): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
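Note: the ErrImagePull above is not a registry failure. "pull QPS exceeded" is kubelet-side throttling: image pulls pass through a token-bucket rate limiter built from the kubelet's registryPullQPS (default 5) and registryBurst (default 10) settings, and with this many operator pods pulling at once the bucket empties, so later pulls are rejected and retried with backoff. A sketch of that behavior using the client-go flowcontrol helper (the loop and pull count are illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Defaults assumed from KubeletConfiguration: QPS 5, burst 10.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10)
        for i := 0; i < 12; i++ {
            if !limiter.TryAccept() {
                // Matches the log: the pull is rejected immediately rather
                // than queued; kubelet surfaces it as ErrImagePull.
                fmt.Printf("pull %d: pull QPS exceeded\n", i)
                continue
            }
            fmt.Printf("pull %d: allowed\n", i)
        }
    }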
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:f50229c8a33fd581bccbe5f34bbaf3936c1b454802e755c9b48b40b76a8239ee,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelo
pe-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/po
dified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,V
alueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/op
enstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2ltv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr_openstack-operators(30502d18-201c-4133-b25a-7b1e96ce21cf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 18:38:41 crc kubenswrapper[4832]: E1002 18:38:41.031597 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-47r7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-59mvq_openstack-operators(6ad88169-8b29-4078-90a5-759d2cb18325): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 18:38:41 crc kubenswrapper[4832]: E1002 18:38:41.033089 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" podUID="6ad88169-8b29-4078-90a5-759d2cb18325" Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.033292 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq"] Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.053995 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9"] Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.085097 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx"] Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.120309 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb"] Oct 02 18:38:41 crc kubenswrapper[4832]: W1002 18:38:41.165575 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45bbf7cb_04fb_4076_af85_0cecd610a929.slice/crio-018662c60d4fdae4d08c4f9b38d718268b6e1d93a5bf6d6b0b8442a2d9a1ca2d WatchSource:0}: Error finding container 018662c60d4fdae4d08c4f9b38d718268b6e1d93a5bf6d6b0b8442a2d9a1ca2d: Status 404 returned error can't find the container with id 018662c60d4fdae4d08c4f9b38d718268b6e1d93a5bf6d6b0b8442a2d9a1ca2d Oct 02 18:38:41 crc kubenswrapper[4832]: W1002 18:38:41.168316 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47968f19_fabf_423c_9cf6_1d8b57654e3f.slice/crio-83c3785b13cd2412dc0b42fd4c99599f3fb8a610b57cc53a46cddfadc2d7f1ac WatchSource:0}: Error finding container 83c3785b13cd2412dc0b42fd4c99599f3fb8a610b57cc53a46cddfadc2d7f1ac: Status 404 returned error can't find the container with id 83c3785b13cd2412dc0b42fd4c99599f3fb8a610b57cc53a46cddfadc2d7f1ac Oct 02 18:38:41 crc kubenswrapper[4832]: E1002 18:38:41.170526 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:018151bd5ff830ec03c6b8e3d53cfb9456ca6e1e34793bdd4f7edd39a0146fa6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5gwp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-fcd7d9895-plmxx_openstack-operators(45bbf7cb-04fb-4076-af85-0cecd610a929): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.461927 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" event={"ID":"2ac2d023-64bc-4653-a8eb-2dd5ed49313c","Type":"ContainerStarted","Data":"b4a0dea8160225fae7c7b84121968082a93b80ea67ec5eada627130af08c7434"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.463795 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" event={"ID":"45bbf7cb-04fb-4076-af85-0cecd610a929","Type":"ContainerStarted","Data":"018662c60d4fdae4d08c4f9b38d718268b6e1d93a5bf6d6b0b8442a2d9a1ca2d"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.464911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" event={"ID":"30502d18-201c-4133-b25a-7b1e96ce21cf","Type":"ContainerStarted","Data":"1a8e96fc2a5c3d5c07f0f8f1500546f47ecbac630fed8b0be63b8de682a54d3a"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.468660 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" event={"ID":"0835997a-eef2-4744-a6ed-dce8714f62f7","Type":"ContainerStarted","Data":"e21ca1a5be086e64f4ae5c0c5dd732d4201116d29a8799bd4263c755b4e37110"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.470916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" event={"ID":"e6f36bc2-bb15-47f7-9881-05f35c2c513c","Type":"ContainerStarted","Data":"9cdff0acf248d98cefedcc2f2a2da397d4e1ca27b2d12434102b60ea3838b1f7"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.475413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" event={"ID":"8b694594-41bd-4e62-a202-951f85430ff6","Type":"ContainerStarted","Data":"e52c78c3437d4f481f03c5d054e12b30199aa9a56c7e081c2e1ab290102f3e41"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.477841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" event={"ID":"655a4d07-4b1f-420e-b676-8e5094960f64","Type":"ContainerStarted","Data":"3e23e6914115d6a5b20fbe3673c28af6594942a4caf76bef5103fe9b9c098d45"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.480683 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" event={"ID":"13334024-dee1-47bd-aebe-22df02b93ea0","Type":"ContainerStarted","Data":"d489d0e2a4bea094cb1c965ad95c1c118d1140f99134aab72ac2024ce0f56727"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.482214 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" event={"ID":"60b0fee3-0856-4087-ad87-0a4847e3613c","Type":"ContainerStarted","Data":"28434a1a6ab501c5ba92edde3ee11494a9db98925c5892cea86c382a967555ec"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.485575 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" event={"ID":"e8179b13-12b7-492d-bc86-f5543cfcbfbb","Type":"ContainerStarted","Data":"b01da33aa50ee205891a8cfcbaf04f5b09017662d1cf7168b30e61732c0a195d"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.489151 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" event={"ID":"6ad88169-8b29-4078-90a5-759d2cb18325","Type":"ContainerStarted","Data":"e4fb144c9727156da03585f005549668a1e806cc5baa25e58b2845d5295d4ed0"} Oct 02 18:38:41 crc kubenswrapper[4832]: I1002 18:38:41.490323 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" event={"ID":"47968f19-fabf-423c-9cf6-1d8b57654e3f","Type":"ContainerStarted","Data":"83c3785b13cd2412dc0b42fd4c99599f3fb8a610b57cc53a46cddfadc2d7f1ac"} Oct 02 18:38:41 crc kubenswrapper[4832]: E1002 18:38:41.491790 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" podUID="6ad88169-8b29-4078-90a5-759d2cb18325" Oct 02 18:38:42 crc kubenswrapper[4832]: E1002 
18:38:42.501105 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" podUID="6ad88169-8b29-4078-90a5-759d2cb18325" Oct 02 18:38:43 crc kubenswrapper[4832]: E1002 18:38:43.962156 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" podUID="13334024-dee1-47bd-aebe-22df02b93ea0" Oct 02 18:38:44 crc kubenswrapper[4832]: I1002 18:38:44.534528 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" event={"ID":"13334024-dee1-47bd-aebe-22df02b93ea0","Type":"ContainerStarted","Data":"2fea00d9c96f3a71631634345e9ab4ae0c8294e4b9a07d4d7feab85445519e2c"} Oct 02 18:38:44 crc kubenswrapper[4832]: E1002 18:38:44.537003 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" podUID="13334024-dee1-47bd-aebe-22df02b93ea0" Oct 02 18:38:45 crc kubenswrapper[4832]: E1002 18:38:45.565239 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:0daf76cc40ab619ae266b11defcc1b65beb22d859369e7b1b04de9169089a4cb\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" podUID="13334024-dee1-47bd-aebe-22df02b93ea0" Oct 02 18:38:58 crc kubenswrapper[4832]: E1002 18:38:58.038861 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5c6ab93b78bd20eb7f1736751a59c1eb33fb06351339563dbefe49ccaaff6e94" Oct 02 18:38:58 crc kubenswrapper[4832]: E1002 18:38:58.039606 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5c6ab93b78bd20eb7f1736751a59c1eb33fb06351339563dbefe49ccaaff6e94,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-999lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-688db7b6c7-p9wql_openstack-operators(8b694594-41bd-4e62-a202-951f85430ff6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:39:00 crc kubenswrapper[4832]: E1002 18:39:00.251013 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:40fb1819b6639807b77ef79448d35f1e4bfc1838a09d4f380e9fa0f755352475" Oct 02 18:39:00 crc kubenswrapper[4832]: E1002 18:39:00.251472 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:40fb1819b6639807b77ef79448d35f1e4bfc1838a09d4f380e9fa0f755352475,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fd422,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-5fbf469cd7-262c9_openstack-operators(0835997a-eef2-4744-a6ed-dce8714f62f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:39:00 crc kubenswrapper[4832]: E1002 18:39:00.751589 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:b7409dcf05c85eab205904d29d4276f8e927c772eba6363ecfa21ab10c4aaa01" Oct 02 18:39:00 crc kubenswrapper[4832]: E1002 18:39:00.752037 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:b7409dcf05c85eab205904d29d4276f8e927c772eba6363ecfa21ab10c4aaa01,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vswsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-59d6cfdf45-tsswz_openstack-operators(655a4d07-4b1f-420e-b676-8e5094960f64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:39:01 crc kubenswrapper[4832]: E1002 18:39:01.276854 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:a82409e6d6a5554aad95acfe6fa4784e33de19a963eb8b1da1a80a3e6cf1ab55" Oct 02 18:39:01 crc kubenswrapper[4832]: E1002 18:39:01.277136 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:a82409e6d6a5554aad95acfe6fa4784e33de19a963eb8b1da1a80a3e6cf1ab55,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42ggh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-555c7456bd-v8gc6_openstack-operators(60b0fee3-0856-4087-ad87-0a4847e3613c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:39:01 crc kubenswrapper[4832]: E1002 18:39:01.760879 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed" Oct 02 18:39:01 crc kubenswrapper[4832]: E1002 18:39:01.761068 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xnfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6859f9b676-97td6_openstack-operators(2ac2d023-64bc-4653-a8eb-2dd5ed49313c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:39:02 crc kubenswrapper[4832]: E1002 18:39:02.684867 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:725da67b3f9cf2758564e0111928cdd570c0f6f1ca34775f159bbe94deb82548" Oct 02 18:39:02 crc kubenswrapper[4832]: E1002 18:39:02.685350 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:725da67b3f9cf2758564e0111928cdd570c0f6f1ca34775f159bbe94deb82548,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m5hh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-7d8bb7f44c-vqrd9_openstack-operators(e8179b13-12b7-492d-bc86-f5543cfcbfbb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:39:02 crc kubenswrapper[4832]: E1002 18:39:02.861105 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:39199b16fb4ef65d07a91339e59f624612b96660" Oct 02 18:39:02 crc kubenswrapper[4832]: E1002 18:39:02.861182 4832 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:39199b16fb4ef65d07a91339e59f624612b96660" Oct 02 18:39:02 crc kubenswrapper[4832]: E1002 18:39:02.861412 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:39199b16fb4ef65d07a91339e59f624612b96660,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ccpg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-769bf6645d-wj4tb_openstack-operators(e6f36bc2-bb15-47f7-9881-05f35c2c513c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:39:03 crc kubenswrapper[4832]: E1002 18:39:03.022536 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" podUID="30502d18-201c-4133-b25a-7b1e96ce21cf" Oct 02 18:39:03 crc kubenswrapper[4832]: I1002 18:39:03.729430 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" event={"ID":"30502d18-201c-4133-b25a-7b1e96ce21cf","Type":"ContainerStarted","Data":"89a670f7eed7cd6327043e6a8197c6a049335227b60c9b19a751861048353365"} Oct 02 18:39:03 crc kubenswrapper[4832]: E1002 18:39:03.806349 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" podUID="8b694594-41bd-4e62-a202-951f85430ff6" Oct 02 18:39:03 crc kubenswrapper[4832]: E1002 18:39:03.823061 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" podUID="45bbf7cb-04fb-4076-af85-0cecd610a929" Oct 02 18:39:04 crc kubenswrapper[4832]: E1002 18:39:04.552809 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" podUID="e6f36bc2-bb15-47f7-9881-05f35c2c513c" Oct 02 18:39:04 crc kubenswrapper[4832]: E1002 18:39:04.554892 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" podUID="0835997a-eef2-4744-a6ed-dce8714f62f7" Oct 02 18:39:04 crc kubenswrapper[4832]: I1002 18:39:04.742501 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" 
event={"ID":"3a910552-db07-45ab-9f11-5b5051a1d070","Type":"ContainerStarted","Data":"1a8a74626ec2d90a72b5480cbc8b3111fbf83c42897644bda60d9f67c85cb09e"} Oct 02 18:39:04 crc kubenswrapper[4832]: I1002 18:39:04.749159 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" event={"ID":"45bbf7cb-04fb-4076-af85-0cecd610a929","Type":"ContainerStarted","Data":"4a7f11a53c012928be9742166e509fca342178cc2384df67a8d46688ee1976ba"} Oct 02 18:39:04 crc kubenswrapper[4832]: I1002 18:39:04.755630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" event={"ID":"0835997a-eef2-4744-a6ed-dce8714f62f7","Type":"ContainerStarted","Data":"73f9c8cc1360a25e4504ce90013b28df65d96adb9c6d89eab3abc5bcc3b9600f"} Oct 02 18:39:04 crc kubenswrapper[4832]: I1002 18:39:04.758747 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" event={"ID":"e6f36bc2-bb15-47f7-9881-05f35c2c513c","Type":"ContainerStarted","Data":"d2eccebd8de3372148009c6516d612bdb07350aff0907905ee98a46d4c5564a5"} Oct 02 18:39:04 crc kubenswrapper[4832]: E1002 18:39:04.759173 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:40fb1819b6639807b77ef79448d35f1e4bfc1838a09d4f380e9fa0f755352475\\\"\"" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" podUID="0835997a-eef2-4744-a6ed-dce8714f62f7" Oct 02 18:39:04 crc kubenswrapper[4832]: I1002 18:39:04.763828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" event={"ID":"8b694594-41bd-4e62-a202-951f85430ff6","Type":"ContainerStarted","Data":"fce4b71397c7d41761a5012c679cc1d077122a754572b9d72bcdea93bbdf8837"} Oct 02 18:39:04 crc kubenswrapper[4832]: E1002 18:39:04.767749 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:39199b16fb4ef65d07a91339e59f624612b96660\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" podUID="e6f36bc2-bb15-47f7-9881-05f35c2c513c" Oct 02 18:39:04 crc kubenswrapper[4832]: E1002 18:39:04.780282 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" podUID="2ac2d023-64bc-4653-a8eb-2dd5ed49313c" Oct 02 18:39:04 crc kubenswrapper[4832]: I1002 18:39:04.798142 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" event={"ID":"47968f19-fabf-423c-9cf6-1d8b57654e3f","Type":"ContainerStarted","Data":"c8aaa13886e9088eb30fd528a05a535e34990aca41db8a1e1ec7030e8e3a5a6f"} Oct 02 18:39:04 crc kubenswrapper[4832]: I1002 18:39:04.806093 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" event={"ID":"2ac2d023-64bc-4653-a8eb-2dd5ed49313c","Type":"ContainerStarted","Data":"fcc9e6ae7744a0ec3b3757ea0b62ec0940ebe1c20e19eb97fa477f2a33708c94"} Oct 
02 18:39:04 crc kubenswrapper[4832]: E1002 18:39:04.828037 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5c6ab93b78bd20eb7f1736751a59c1eb33fb06351339563dbefe49ccaaff6e94\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" podUID="8b694594-41bd-4e62-a202-951f85430ff6" Oct 02 18:39:04 crc kubenswrapper[4832]: E1002 18:39:04.842043 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" podUID="2ac2d023-64bc-4653-a8eb-2dd5ed49313c" Oct 02 18:39:05 crc kubenswrapper[4832]: E1002 18:39:05.480485 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" podUID="60b0fee3-0856-4087-ad87-0a4847e3613c" Oct 02 18:39:05 crc kubenswrapper[4832]: E1002 18:39:05.481358 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" podUID="e8179b13-12b7-492d-bc86-f5543cfcbfbb" Oct 02 18:39:05 crc kubenswrapper[4832]: E1002 18:39:05.487987 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" podUID="655a4d07-4b1f-420e-b676-8e5094960f64" Oct 02 18:39:05 crc kubenswrapper[4832]: I1002 18:39:05.836559 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" event={"ID":"6ad88169-8b29-4078-90a5-759d2cb18325","Type":"ContainerStarted","Data":"c6f2a44abd93f95f23940fe5259f87b3d8a7bf4b240bcdbdc21619d0993ba895"} Oct 02 18:39:05 crc kubenswrapper[4832]: I1002 18:39:05.889016 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" event={"ID":"60b0fee3-0856-4087-ad87-0a4847e3613c","Type":"ContainerStarted","Data":"9823db47adf2c17cde3c2efaff61dbe53034090f4db0e5a2827a85513d98b48f"} Oct 02 18:39:05 crc kubenswrapper[4832]: E1002 18:39:05.913991 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:a82409e6d6a5554aad95acfe6fa4784e33de19a963eb8b1da1a80a3e6cf1ab55\\\"\"" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" podUID="60b0fee3-0856-4087-ad87-0a4847e3613c" Oct 02 18:39:05 crc kubenswrapper[4832]: I1002 18:39:05.923556 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" 
event={"ID":"0fa21051-6127-497b-a7dc-f4156314397e","Type":"ContainerStarted","Data":"bdac82a60eb14a2e5f778ee1ec76d841a980bb84b9e2b0718c8736cd3938a0e2"} Oct 02 18:39:05 crc kubenswrapper[4832]: I1002 18:39:05.960809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" event={"ID":"6049f0ba-16e6-4773-bc16-d26b6e04364e","Type":"ContainerStarted","Data":"c39ed1b9486ab0af42722e548a8bdde71b643b849e3be53ee3350d2f3ad501ad"} Oct 02 18:39:05 crc kubenswrapper[4832]: I1002 18:39:05.970489 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" event={"ID":"3a910552-db07-45ab-9f11-5b5051a1d070","Type":"ContainerStarted","Data":"d49c9caa8039babb4cae54ad40415b1572fc477a0caddc8101bf3581e4f2b203"} Oct 02 18:39:05 crc kubenswrapper[4832]: I1002 18:39:05.971221 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" Oct 02 18:39:05 crc kubenswrapper[4832]: I1002 18:39:05.985636 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" event={"ID":"655a4d07-4b1f-420e-b676-8e5094960f64","Type":"ContainerStarted","Data":"f19478f0441afe321911310c5c551180905ba3e32d3fdb0f5fe9219991525f6b"} Oct 02 18:39:05 crc kubenswrapper[4832]: E1002 18:39:05.988908 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:b7409dcf05c85eab205904d29d4276f8e927c772eba6363ecfa21ab10c4aaa01\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" podUID="655a4d07-4b1f-420e-b676-8e5094960f64" Oct 02 18:39:05 crc kubenswrapper[4832]: I1002 18:39:05.993897 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" event={"ID":"d06661d7-5a41-4954-bfe4-8d25a9aa49d1","Type":"ContainerStarted","Data":"5d46304e3130f81ac27e5d64c1121f92de5418e55ab04ea816953f51e5fcabd0"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.002459 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" event={"ID":"5ae05766-702f-4f1d-a149-a01663fd2b53","Type":"ContainerStarted","Data":"12d9cf21d13a25ae4ac54370d3e0fc5e9460b44ac5b5b5ce92205c47c4769c95"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.011999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" event={"ID":"47968f19-fabf-423c-9cf6-1d8b57654e3f","Type":"ContainerStarted","Data":"39dc5dfa7151bcad3d221e63332afc15c899246728410e9475908479c584c435"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.012754 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.024459 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" event={"ID":"36d178a5-f367-4534-ab1e-54c162ce2961","Type":"ContainerStarted","Data":"44d19d0a1a228eadfc38494f4d2e2f6dbb5b07227bdcfda7dce886183c3f5fe4"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 
18:39:06.049389 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" event={"ID":"7a36740f-eefd-4d9d-afe2-491d02a75fa6","Type":"ContainerStarted","Data":"36995251b91d172e89598fe94ef28444874a300e5e26756634652524a95396e8"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.062535 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" event={"ID":"2cca84e4-3eb8-41c8-95db-f5b755e83758","Type":"ContainerStarted","Data":"4b12fb3678f4bf468a5ca029ac18387bef8365140b0acfcdbf78212f2a24f597"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.074583 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" event={"ID":"e8179b13-12b7-492d-bc86-f5543cfcbfbb","Type":"ContainerStarted","Data":"2626bd0f6db6d35110d8144b264ff7b5c367cb1cfc8faf0c80a00900fa780d71"} Oct 02 18:39:06 crc kubenswrapper[4832]: E1002 18:39:06.078526 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:725da67b3f9cf2758564e0111928cdd570c0f6f1ca34775f159bbe94deb82548\\\"\"" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" podUID="e8179b13-12b7-492d-bc86-f5543cfcbfbb" Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.082225 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-59mvq" podStartSLOduration=3.497507485 podStartE2EDuration="27.082200015s" podCreationTimestamp="2025-10-02 18:38:39 +0000 UTC" firstStartedPulling="2025-10-02 18:38:41.031492079 +0000 UTC m=+1078.000934951" lastFinishedPulling="2025-10-02 18:39:04.616184619 +0000 UTC m=+1101.585627481" observedRunningTime="2025-10-02 18:39:06.035743398 +0000 UTC m=+1103.005186270" watchObservedRunningTime="2025-10-02 18:39:06.082200015 +0000 UTC m=+1103.051642887" Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.089411 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" event={"ID":"93650652-02f0-403d-a9e6-6a71feb797c6","Type":"ContainerStarted","Data":"ecae9f53c020e71c8bc08c7cab886b712986f9d27bde4af43d48f4f3338534fd"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.114164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" event={"ID":"b66dc2a1-b115-4952-99ce-866046ca9ea5","Type":"ContainerStarted","Data":"9a2601d8290c6125a8e8128e7ab3502833d4409083b45b14a54579d516c0ae22"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.119850 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" podStartSLOduration=5.056810874 podStartE2EDuration="28.119823237s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:39.833220788 +0000 UTC m=+1076.802663660" lastFinishedPulling="2025-10-02 18:39:02.896233151 +0000 UTC m=+1099.865676023" observedRunningTime="2025-10-02 18:39:06.11385907 +0000 UTC m=+1103.083301942" watchObservedRunningTime="2025-10-02 18:39:06.119823237 +0000 UTC m=+1103.089266109" Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 
18:39:06.143780 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" event={"ID":"b2e208d4-436d-4e17-b4b3-165b130164c7","Type":"ContainerStarted","Data":"60810473359d8e32053caa74710a9497f37327bc26ed813b3909341568b195f7"} Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.159788 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" event={"ID":"13334024-dee1-47bd-aebe-22df02b93ea0","Type":"ContainerStarted","Data":"8cd0bcffcf47ae596a4160cfe7681a7d7a0a217c2f9eb6ff26aaf6fda6db2ea2"} Oct 02 18:39:06 crc kubenswrapper[4832]: E1002 18:39:06.162433 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:637bb7b9ac308bc1e323391a3593b824f688090a856c83385814c17a571b1eed\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" podUID="2ac2d023-64bc-4653-a8eb-2dd5ed49313c" Oct 02 18:39:06 crc kubenswrapper[4832]: E1002 18:39:06.162798 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5c6ab93b78bd20eb7f1736751a59c1eb33fb06351339563dbefe49ccaaff6e94\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" podUID="8b694594-41bd-4e62-a202-951f85430ff6" Oct 02 18:39:06 crc kubenswrapper[4832]: E1002 18:39:06.162859 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:40fb1819b6639807b77ef79448d35f1e4bfc1838a09d4f380e9fa0f755352475\\\"\"" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" podUID="0835997a-eef2-4744-a6ed-dce8714f62f7" Oct 02 18:39:06 crc kubenswrapper[4832]: E1002 18:39:06.163334 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:39199b16fb4ef65d07a91339e59f624612b96660\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" podUID="e6f36bc2-bb15-47f7-9881-05f35c2c513c" Oct 02 18:39:06 crc kubenswrapper[4832]: I1002 18:39:06.269356 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" podStartSLOduration=28.269337262 podStartE2EDuration="28.269337262s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:39:06.246738422 +0000 UTC m=+1103.216181294" watchObservedRunningTime="2025-10-02 18:39:06.269337262 +0000 UTC m=+1103.238780134" Oct 02 18:39:07 crc kubenswrapper[4832]: E1002 18:39:07.170426 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:b7409dcf05c85eab205904d29d4276f8e927c772eba6363ecfa21ab10c4aaa01\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" 
podUID="655a4d07-4b1f-420e-b676-8e5094960f64" Oct 02 18:39:07 crc kubenswrapper[4832]: E1002 18:39:07.171092 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:725da67b3f9cf2758564e0111928cdd570c0f6f1ca34775f159bbe94deb82548\\\"\"" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" podUID="e8179b13-12b7-492d-bc86-f5543cfcbfbb" Oct 02 18:39:07 crc kubenswrapper[4832]: E1002 18:39:07.171155 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:a82409e6d6a5554aad95acfe6fa4784e33de19a963eb8b1da1a80a3e6cf1ab55\\\"\"" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" podUID="60b0fee3-0856-4087-ad87-0a4847e3613c" Oct 02 18:39:08 crc kubenswrapper[4832]: I1002 18:39:08.182400 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79d68d6c85-98l5q" Oct 02 18:39:08 crc kubenswrapper[4832]: I1002 18:39:08.189484 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7bffff79d9-sj2rb" Oct 02 18:39:09 crc kubenswrapper[4832]: I1002 18:39:09.216190 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" podStartSLOduration=8.589365231 podStartE2EDuration="31.216169178s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:41.022433165 +0000 UTC m=+1077.991876037" lastFinishedPulling="2025-10-02 18:39:03.649237112 +0000 UTC m=+1100.618679984" observedRunningTime="2025-10-02 18:39:09.211540892 +0000 UTC m=+1106.180983764" watchObservedRunningTime="2025-10-02 18:39:09.216169178 +0000 UTC m=+1106.185612050" Oct 02 18:39:09 crc kubenswrapper[4832]: I1002 18:39:09.642522 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.210550 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" event={"ID":"0fa21051-6127-497b-a7dc-f4156314397e","Type":"ContainerStarted","Data":"5a22ad044dc7caddb2605d822e6f3530d9d102f48d3c4f2fa1214a1ff9873700"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.211733 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.212778 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.214610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" event={"ID":"7a36740f-eefd-4d9d-afe2-491d02a75fa6","Type":"ContainerStarted","Data":"f028c685506f19211cbf8f3d28f08aab145abddc5c3793e2f9bb03bd9f773f2f"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.215310 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.217322 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.217489 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" event={"ID":"93650652-02f0-403d-a9e6-6a71feb797c6","Type":"ContainerStarted","Data":"2190882aeb12a63172ed2aa73f637063575a623b9e3f32a25c0d7ddf0f00cbb0"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.217674 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.219712 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" event={"ID":"5ae05766-702f-4f1d-a149-a01663fd2b53","Type":"ContainerStarted","Data":"e2930e994516d4e46af2abbc42fe0bdb73d9bcc4b4f8ac6a8be62bc2c9c79f66"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.219883 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.220754 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.221669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" event={"ID":"d06661d7-5a41-4954-bfe4-8d25a9aa49d1","Type":"ContainerStarted","Data":"401098adb91c0bf592e4a37fd983fd9a786f0a1b7639602bab56b0212c2445c1"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.222143 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.222867 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.227209 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.231133 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-846dff85b5-9pnvm" podStartSLOduration=9.163498196 podStartE2EDuration="32.231114553s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:39.82850703 +0000 UTC m=+1076.797949902" lastFinishedPulling="2025-10-02 18:39:02.896123387 +0000 UTC m=+1099.865566259" observedRunningTime="2025-10-02 18:39:10.22719542 +0000 UTC m=+1107.196638312" watchObservedRunningTime="2025-10-02 18:39:10.231114553 +0000 UTC m=+1107.200557425" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.231978 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" 
event={"ID":"36d178a5-f367-4534-ab1e-54c162ce2961","Type":"ContainerStarted","Data":"7c07406df04005a96bd87baeef16a0c2903c74a9caf4a841fd7a5b2ee17ca5c1"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.232778 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.235150 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.236755 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" event={"ID":"2cca84e4-3eb8-41c8-95db-f5b755e83758","Type":"ContainerStarted","Data":"a9d0d3a9df379ab538a0746a10c9e561cad8cf04c767648d2d48264592c97769"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.237671 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.239101 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.240156 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" event={"ID":"6049f0ba-16e6-4773-bc16-d26b6e04364e","Type":"ContainerStarted","Data":"3d43eec8e1d86b20f241af89c31a182090e8b48ed17d0469decfc36120ae935d"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.241313 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.246510 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.249018 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-599898f689-rnsmf" podStartSLOduration=9.202787019 podStartE2EDuration="32.249007104s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:39.850011716 +0000 UTC m=+1076.819454588" lastFinishedPulling="2025-10-02 18:39:02.896231781 +0000 UTC m=+1099.865674673" observedRunningTime="2025-10-02 18:39:10.248697085 +0000 UTC m=+1107.218139967" watchObservedRunningTime="2025-10-02 18:39:10.249007104 +0000 UTC m=+1107.218449966" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.257316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" event={"ID":"b66dc2a1-b115-4952-99ce-866046ca9ea5","Type":"ContainerStarted","Data":"dcfde9b9a2a35b474541133a7c7920654fd7e427e355c733cc94355ab3072c47"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.258334 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.267212 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.267371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" event={"ID":"b2e208d4-436d-4e17-b4b3-165b130164c7","Type":"ContainerStarted","Data":"8d7c07eb05c99157f1338123586b60644a3544a9101b7262cde494016c72db9f"} Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.268381 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6fd6854b49-ts5sn" podStartSLOduration=9.716954383000001 podStartE2EDuration="32.268363352s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.368299138 +0000 UTC m=+1077.337742000" lastFinishedPulling="2025-10-02 18:39:02.919708097 +0000 UTC m=+1099.889150969" observedRunningTime="2025-10-02 18:39:10.263050015 +0000 UTC m=+1107.232492887" watchObservedRunningTime="2025-10-02 18:39:10.268363352 +0000 UTC m=+1107.237806224" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.268496 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.270970 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd5cb47d7-frks4" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.274873 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.331304 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-84bc9db6cc-xf6p9" podStartSLOduration=9.062095083 podStartE2EDuration="32.331287828s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.26104215 +0000 UTC m=+1077.230485012" lastFinishedPulling="2025-10-02 18:39:03.530234885 +0000 UTC m=+1100.499677757" observedRunningTime="2025-10-02 18:39:10.319093505 +0000 UTC m=+1107.288536377" watchObservedRunningTime="2025-10-02 18:39:10.331287828 +0000 UTC m=+1107.300730700" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.343706 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-75dfd9b554-sqc96" podStartSLOduration=8.719600411 podStartE2EDuration="32.343690067s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:39.272061251 +0000 UTC m=+1076.241504123" lastFinishedPulling="2025-10-02 18:39:02.896150907 +0000 UTC m=+1099.865593779" observedRunningTime="2025-10-02 18:39:10.341511818 +0000 UTC m=+1107.310954690" watchObservedRunningTime="2025-10-02 18:39:10.343690067 +0000 UTC m=+1107.313132939" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.365952 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6769b867d9-zsnms" podStartSLOduration=8.687299495 podStartE2EDuration="32.365935865s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:39.849988975 +0000 UTC m=+1076.819431847" lastFinishedPulling="2025-10-02 18:39:03.528625345 +0000 UTC m=+1100.498068217" 
observedRunningTime="2025-10-02 18:39:10.36003669 +0000 UTC m=+1107.329479562" watchObservedRunningTime="2025-10-02 18:39:10.365935865 +0000 UTC m=+1107.335378737" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.395650 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5c468bf4d4-nb6x7" podStartSLOduration=9.761928233999999 podStartE2EDuration="32.395630897s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.263515718 +0000 UTC m=+1077.232958590" lastFinishedPulling="2025-10-02 18:39:02.897218381 +0000 UTC m=+1099.866661253" observedRunningTime="2025-10-02 18:39:10.380129421 +0000 UTC m=+1107.349572293" watchObservedRunningTime="2025-10-02 18:39:10.395630897 +0000 UTC m=+1107.365073770" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.409971 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6c675fb79f-szf5c" podStartSLOduration=8.747244338 podStartE2EDuration="32.409952457s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:39.265162884 +0000 UTC m=+1076.234605756" lastFinishedPulling="2025-10-02 18:39:02.927871003 +0000 UTC m=+1099.897313875" observedRunningTime="2025-10-02 18:39:10.40461962 +0000 UTC m=+1107.374062492" watchObservedRunningTime="2025-10-02 18:39:10.409952457 +0000 UTC m=+1107.379395329" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.442188 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7f55849f88-fv5h5" podStartSLOduration=9.874386274999999 podStartE2EDuration="32.442174259s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.361400491 +0000 UTC m=+1077.330843363" lastFinishedPulling="2025-10-02 18:39:02.929188475 +0000 UTC m=+1099.898631347" observedRunningTime="2025-10-02 18:39:10.43997553 +0000 UTC m=+1107.409418402" watchObservedRunningTime="2025-10-02 18:39:10.442174259 +0000 UTC m=+1107.411617121" Oct 02 18:39:10 crc kubenswrapper[4832]: I1002 18:39:10.465208 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6574bf987d-79nms" podStartSLOduration=9.312903248 podStartE2EDuration="32.465195172s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.373663946 +0000 UTC m=+1077.343106818" lastFinishedPulling="2025-10-02 18:39:03.52595587 +0000 UTC m=+1100.495398742" observedRunningTime="2025-10-02 18:39:10.463350823 +0000 UTC m=+1107.432793695" watchObservedRunningTime="2025-10-02 18:39:10.465195172 +0000 UTC m=+1107.434638044" Oct 02 18:39:13 crc kubenswrapper[4832]: I1002 18:39:13.297068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" event={"ID":"45bbf7cb-04fb-4076-af85-0cecd610a929","Type":"ContainerStarted","Data":"40ac87eb5ed7ea49a5f9a3ec75d78e9c0e4e6a147c4728ada5e3672155d7b69e"} Oct 02 18:39:13 crc kubenswrapper[4832]: I1002 18:39:13.298351 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" Oct 02 18:39:13 crc kubenswrapper[4832]: I1002 18:39:13.299435 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" event={"ID":"30502d18-201c-4133-b25a-7b1e96ce21cf","Type":"ContainerStarted","Data":"92b3b5849fc7af45ae77325c6c0ffac51c8ee09e430a17cb601dd4dbb652e3be"} Oct 02 18:39:13 crc kubenswrapper[4832]: I1002 18:39:13.299629 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:39:13 crc kubenswrapper[4832]: I1002 18:39:13.319787 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" podStartSLOduration=3.843570965 podStartE2EDuration="35.319768144s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:41.17040027 +0000 UTC m=+1078.139843142" lastFinishedPulling="2025-10-02 18:39:12.646597429 +0000 UTC m=+1109.616040321" observedRunningTime="2025-10-02 18:39:13.315904973 +0000 UTC m=+1110.285347845" watchObservedRunningTime="2025-10-02 18:39:13.319768144 +0000 UTC m=+1110.289211026" Oct 02 18:39:18 crc kubenswrapper[4832]: I1002 18:39:18.366552 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" event={"ID":"0835997a-eef2-4744-a6ed-dce8714f62f7","Type":"ContainerStarted","Data":"d1d589a4a9f825d4af095d632c12070cd098648b14d242fa8f9a1db6bda6da47"} Oct 02 18:39:18 crc kubenswrapper[4832]: I1002 18:39:18.367345 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:39:18 crc kubenswrapper[4832]: I1002 18:39:18.393166 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" podStartSLOduration=8.784123418 podStartE2EDuration="40.393144507s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:41.029544338 +0000 UTC m=+1077.998987220" lastFinishedPulling="2025-10-02 18:39:12.638565397 +0000 UTC m=+1109.608008309" observedRunningTime="2025-10-02 18:39:13.34800945 +0000 UTC m=+1110.317452362" watchObservedRunningTime="2025-10-02 18:39:18.393144507 +0000 UTC m=+1115.362587399" Oct 02 18:39:18 crc kubenswrapper[4832]: I1002 18:39:18.396961 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" podStartSLOduration=3.756912904 podStartE2EDuration="40.396945175s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:41.029524238 +0000 UTC m=+1077.998967110" lastFinishedPulling="2025-10-02 18:39:17.669556499 +0000 UTC m=+1114.638999381" observedRunningTime="2025-10-02 18:39:18.386170808 +0000 UTC m=+1115.355613690" watchObservedRunningTime="2025-10-02 18:39:18.396945175 +0000 UTC m=+1115.366388057" Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.377873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" event={"ID":"8b694594-41bd-4e62-a202-951f85430ff6","Type":"ContainerStarted","Data":"31e05eea5529c5dbe6153c9a934b40fab87a5e03b3eccd6462f7b081a2ed776d"} Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.379309 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.385061 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" event={"ID":"655a4d07-4b1f-420e-b676-8e5094960f64","Type":"ContainerStarted","Data":"e13e77a94d231c549f7478c55fc8919bfffb5a50da6156b2af3c965592315cc1"} Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.385615 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.389803 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" event={"ID":"2ac2d023-64bc-4653-a8eb-2dd5ed49313c","Type":"ContainerStarted","Data":"d4a0d2e876688d5210f8e40e56489e75638751b90dfcbc3ced86ad3708edf094"} Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.390663 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.402375 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" podStartSLOduration=3.5132014639999998 podStartE2EDuration="41.402357192s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.978498876 +0000 UTC m=+1077.947941748" lastFinishedPulling="2025-10-02 18:39:18.867654614 +0000 UTC m=+1115.837097476" observedRunningTime="2025-10-02 18:39:19.396987213 +0000 UTC m=+1116.366430095" watchObservedRunningTime="2025-10-02 18:39:19.402357192 +0000 UTC m=+1116.371800054" Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.421212 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" podStartSLOduration=3.727186511 podStartE2EDuration="41.421190143s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.977946308 +0000 UTC m=+1077.947389180" lastFinishedPulling="2025-10-02 18:39:18.6719499 +0000 UTC m=+1115.641392812" observedRunningTime="2025-10-02 18:39:19.413428959 +0000 UTC m=+1116.382871831" watchObservedRunningTime="2025-10-02 18:39:19.421190143 +0000 UTC m=+1116.390633015" Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.442810 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" podStartSLOduration=3.531342632 podStartE2EDuration="41.442792061s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.947735869 +0000 UTC m=+1077.917178741" lastFinishedPulling="2025-10-02 18:39:18.859185298 +0000 UTC m=+1115.828628170" observedRunningTime="2025-10-02 18:39:19.432756036 +0000 UTC m=+1116.402198908" watchObservedRunningTime="2025-10-02 18:39:19.442792061 +0000 UTC m=+1116.412234933" Oct 02 18:39:19 crc kubenswrapper[4832]: I1002 18:39:19.444221 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr" Oct 02 18:39:20 crc kubenswrapper[4832]: I1002 18:39:20.022088 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-fcd7d9895-plmxx" Oct 02 18:39:20 crc kubenswrapper[4832]: I1002 18:39:20.397964 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" event={"ID":"e8179b13-12b7-492d-bc86-f5543cfcbfbb","Type":"ContainerStarted","Data":"f34827fae316fb2a1ed766f9ec7a7fceef5463c9c7eb77c7e78bcb26953aac77"} Oct 02 18:39:20 crc kubenswrapper[4832]: I1002 18:39:20.416474 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" podStartSLOduration=3.651555286 podStartE2EDuration="42.41645607s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.938963154 +0000 UTC m=+1077.908406026" lastFinishedPulling="2025-10-02 18:39:19.703863928 +0000 UTC m=+1116.673306810" observedRunningTime="2025-10-02 18:39:20.411175454 +0000 UTC m=+1117.380618366" watchObservedRunningTime="2025-10-02 18:39:20.41645607 +0000 UTC m=+1117.385898932" Oct 02 18:39:21 crc kubenswrapper[4832]: I1002 18:39:21.419635 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" event={"ID":"e6f36bc2-bb15-47f7-9881-05f35c2c513c","Type":"ContainerStarted","Data":"81cca30a9efa9fca11bbbf9c07317cf53e7f897add99a6aee7f792af8c9e4736"} Oct 02 18:39:21 crc kubenswrapper[4832]: I1002 18:39:21.420639 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" Oct 02 18:39:21 crc kubenswrapper[4832]: I1002 18:39:21.447521 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" podStartSLOduration=4.119703553 podStartE2EDuration="43.44750179s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.97833235 +0000 UTC m=+1077.947775222" lastFinishedPulling="2025-10-02 18:39:20.306130587 +0000 UTC m=+1117.275573459" observedRunningTime="2025-10-02 18:39:21.441965717 +0000 UTC m=+1118.411408599" watchObservedRunningTime="2025-10-02 18:39:21.44750179 +0000 UTC m=+1118.416944662" Oct 02 18:39:23 crc kubenswrapper[4832]: I1002 18:39:23.448640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" event={"ID":"60b0fee3-0856-4087-ad87-0a4847e3613c","Type":"ContainerStarted","Data":"623bd514ef9387fe703d32bc903ee20b105ba1712086d588b3b4a418abcd75fe"} Oct 02 18:39:23 crc kubenswrapper[4832]: I1002 18:39:23.451414 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" Oct 02 18:39:23 crc kubenswrapper[4832]: I1002 18:39:23.490489 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" podStartSLOduration=3.6336689460000002 podStartE2EDuration="45.490454071s" podCreationTimestamp="2025-10-02 18:38:38 +0000 UTC" firstStartedPulling="2025-10-02 18:38:40.966901412 +0000 UTC m=+1077.936344284" lastFinishedPulling="2025-10-02 18:39:22.823686507 +0000 UTC m=+1119.793129409" observedRunningTime="2025-10-02 18:39:23.481955694 +0000 UTC m=+1120.451398596" watchObservedRunningTime="2025-10-02 18:39:23.490454071 +0000 UTC m=+1120.459896953" Oct 02 18:39:29 crc 
kubenswrapper[4832]: I1002 18:39:29.296497 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-555c7456bd-v8gc6" Oct 02 18:39:29 crc kubenswrapper[4832]: I1002 18:39:29.408250 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-59d6cfdf45-tsswz" Oct 02 18:39:29 crc kubenswrapper[4832]: I1002 18:39:29.485975 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-688db7b6c7-p9wql" Oct 02 18:39:29 crc kubenswrapper[4832]: I1002 18:39:29.512988 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" Oct 02 18:39:29 crc kubenswrapper[4832]: I1002 18:39:29.527039 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-7d8bb7f44c-vqrd9" Oct 02 18:39:29 crc kubenswrapper[4832]: I1002 18:39:29.558137 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6859f9b676-97td6" Oct 02 18:39:29 crc kubenswrapper[4832]: I1002 18:39:29.563691 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5fbf469cd7-262c9" Oct 02 18:39:29 crc kubenswrapper[4832]: I1002 18:39:29.632756 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-769bf6645d-wj4tb" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.040662 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwwxb"] Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.045042 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.048856 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.049071 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.049174 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.049233 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-q58r4" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.051919 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwwxb"] Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.119431 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ms6mp"] Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.122394 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.123309 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2a1028-527d-4a14-b015-d74c5b9ec1df-config\") pod \"dnsmasq-dns-675f4bcbfc-bwwxb\" (UID: \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.123443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcr25\" (UniqueName: \"kubernetes.io/projected/3f2a1028-527d-4a14-b015-d74c5b9ec1df-kube-api-access-rcr25\") pod \"dnsmasq-dns-675f4bcbfc-bwwxb\" (UID: \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.127853 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.128510 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ms6mp"] Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.224386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.224657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcr25\" (UniqueName: \"kubernetes.io/projected/3f2a1028-527d-4a14-b015-d74c5b9ec1df-kube-api-access-rcr25\") pod \"dnsmasq-dns-675f4bcbfc-bwwxb\" (UID: \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.224785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2a1028-527d-4a14-b015-d74c5b9ec1df-config\") pod \"dnsmasq-dns-675f4bcbfc-bwwxb\" (UID: \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.224936 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-config\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.225038 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c7vt\" (UniqueName: \"kubernetes.io/projected/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-kube-api-access-2c7vt\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.226026 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2a1028-527d-4a14-b015-d74c5b9ec1df-config\") pod \"dnsmasq-dns-675f4bcbfc-bwwxb\" (UID: \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 
18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.257453 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcr25\" (UniqueName: \"kubernetes.io/projected/3f2a1028-527d-4a14-b015-d74c5b9ec1df-kube-api-access-rcr25\") pod \"dnsmasq-dns-675f4bcbfc-bwwxb\" (UID: \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.327127 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-config\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.327227 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c7vt\" (UniqueName: \"kubernetes.io/projected/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-kube-api-access-2c7vt\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.327333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.328515 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-config\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.328595 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.351116 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c7vt\" (UniqueName: \"kubernetes.io/projected/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-kube-api-access-2c7vt\") pod \"dnsmasq-dns-78dd6ddcc-ms6mp\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.378315 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.442769 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.829522 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwwxb"] Oct 02 18:39:47 crc kubenswrapper[4832]: I1002 18:39:47.948567 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ms6mp"] Oct 02 18:39:48 crc kubenswrapper[4832]: I1002 18:39:48.736607 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" event={"ID":"3f2a1028-527d-4a14-b015-d74c5b9ec1df","Type":"ContainerStarted","Data":"e4286f6369c52920add63533e4832d590e8902e95b4ee12dd30d8c27a38efac7"} Oct 02 18:39:48 crc kubenswrapper[4832]: I1002 18:39:48.738533 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" event={"ID":"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa","Type":"ContainerStarted","Data":"7d473cb3885077b4a081b1222b93793e1c7835b32e90efae69eed7bf55951a51"} Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.266465 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwwxb"] Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.288827 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-pzwvs"] Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.290977 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.305543 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-pzwvs"] Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.402055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-config\") pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.402132 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.402184 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8crj\" (UniqueName: \"kubernetes.io/projected/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-kube-api-access-v8crj\") pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.505105 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.505204 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8crj\" (UniqueName: \"kubernetes.io/projected/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-kube-api-access-v8crj\") 
pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.505523 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-config\") pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.506052 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.506700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-config\") pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.523494 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8crj\" (UniqueName: \"kubernetes.io/projected/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-kube-api-access-v8crj\") pod \"dnsmasq-dns-5ccc8479f9-pzwvs\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.556799 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ms6mp"] Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.579823 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jlvv4"] Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.596619 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.619810 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.621846 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jlvv4"] Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.711070 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttlj\" (UniqueName: \"kubernetes.io/projected/271aa011-a53f-4340-be88-34fb8b95a78b-kube-api-access-dttlj\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.711159 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-config\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.711356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.813367 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-config\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.813848 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.813945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dttlj\" (UniqueName: \"kubernetes.io/projected/271aa011-a53f-4340-be88-34fb8b95a78b-kube-api-access-dttlj\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.814534 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-config\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.815188 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.845039 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dttlj\" (UniqueName: 
\"kubernetes.io/projected/271aa011-a53f-4340-be88-34fb8b95a78b-kube-api-access-dttlj\") pod \"dnsmasq-dns-57d769cc4f-jlvv4\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:50 crc kubenswrapper[4832]: I1002 18:39:50.933074 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.399991 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-pzwvs"] Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.421710 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.423440 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.423511 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.453132 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rn6lz" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.453319 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.453475 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.453584 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.453697 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.453835 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.454298 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.507114 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jlvv4"] Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527119 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527157 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfv4z\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-kube-api-access-nfv4z\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527208 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ff074fc-c56e-40f3-a327-b829d84c9866-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527270 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527384 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527410 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ff074fc-c56e-40f3-a327-b829d84c9866-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.527460 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.628945 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629007 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ff074fc-c56e-40f3-a327-b829d84c9866-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629102 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629130 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629153 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfv4z\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-kube-api-access-nfv4z\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629203 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ff074fc-c56e-40f3-a327-b829d84c9866-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc 
kubenswrapper[4832]: I1002 18:39:51.629255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.629633 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.630035 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.630162 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.630719 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.631126 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.631734 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.635045 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ff074fc-c56e-40f3-a327-b829d84c9866-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.635235 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.636012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.643957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ff074fc-c56e-40f3-a327-b829d84c9866-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.652617 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.654694 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfv4z\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-kube-api-access-nfv4z\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.685902 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.687443 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.690147 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.690168 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4vkmm" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.690372 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.690476 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.690574 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.690723 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.692413 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.694012 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.731361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.731413 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd2lw\" (UniqueName: 
\"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-kube-api-access-bd2lw\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.731450 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.731470 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.731698 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.731843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.731885 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.731973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.732017 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.732151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-config-data\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.732195 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.776738 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834351 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834400 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834449 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-config-data\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834533 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd2lw\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-kube-api-access-bd2lw\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834619 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " 
pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834654 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834782 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.834972 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.835064 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.836800 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-config-data\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.837360 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.837928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.850563 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.850597 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.850882 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.850889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.854692 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd2lw\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-kube-api-access-bd2lw\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:51 crc kubenswrapper[4832]: I1002 18:39:51.868435 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " pod="openstack/rabbitmq-server-0" Oct 02 18:39:52 crc kubenswrapper[4832]: I1002 18:39:52.020726 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.034347 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.036391 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.038890 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9n9sq" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.039650 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.040479 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.042486 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.049058 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.054045 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.059913 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 02 18:39:54 crc kubenswrapper[4832]: W1002 18:39:54.077478 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod271aa011_a53f_4340_be88_34fb8b95a78b.slice/crio-ceb74c241c99605eb909b2db32f010b11e749e19dc74f2beeea4f40f417fffeb WatchSource:0}: Error finding container ceb74c241c99605eb909b2db32f010b11e749e19dc74f2beeea4f40f417fffeb: Status 404 returned error can't find the container with id ceb74c241c99605eb909b2db32f010b11e749e19dc74f2beeea4f40f417fffeb Oct 02 18:39:54 crc kubenswrapper[4832]: W1002 18:39:54.092493 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cc9d6ca_a3c7_40fa_b0a6_c78a55fdf9c1.slice/crio-7d8592e5c721770e60db7caf8c7082ecb7c711453e180c186b1130df63ba6266 WatchSource:0}: Error finding container 7d8592e5c721770e60db7caf8c7082ecb7c711453e180c186b1130df63ba6266: Status 404 returned error can't find the container with id 7d8592e5c721770e60db7caf8c7082ecb7c711453e180c186b1130df63ba6266 Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.184070 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2jtc\" (UniqueName: \"kubernetes.io/projected/d6c6d1dc-36df-4b33-8d10-dde52bd65630-kube-api-access-g2jtc\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.184135 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.184307 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.184461 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.184765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-kolla-config\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.185036 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6c6d1dc-36df-4b33-8d10-dde52bd65630-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.185112 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-secrets\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.185178 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-config-data-default\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.185210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.287024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-secrets\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.288374 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-config-data-default\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.288421 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.288494 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2jtc\" (UniqueName: 
\"kubernetes.io/projected/d6c6d1dc-36df-4b33-8d10-dde52bd65630-kube-api-access-g2jtc\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.288515 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.288539 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.288579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.288612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-kolla-config\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.288659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6c6d1dc-36df-4b33-8d10-dde52bd65630-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.289164 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.289482 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d6c6d1dc-36df-4b33-8d10-dde52bd65630-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.290429 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-config-data-default\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.290496 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-kolla-config\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc 
kubenswrapper[4832]: I1002 18:39:54.291878 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6c6d1dc-36df-4b33-8d10-dde52bd65630-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.296086 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.296476 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-secrets\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.297628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c6d1dc-36df-4b33-8d10-dde52bd65630-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.313003 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2jtc\" (UniqueName: \"kubernetes.io/projected/d6c6d1dc-36df-4b33-8d10-dde52bd65630-kube-api-access-g2jtc\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.326487 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"d6c6d1dc-36df-4b33-8d10-dde52bd65630\") " pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.370672 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.401793 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.403911 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.407979 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.409225 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nwk96" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.409690 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.410166 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.413190 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.495927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.496312 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e9a3d78-f055-43d2-9d21-579d4a611d49-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.496357 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.496387 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.496415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.496482 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.496658 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.496735 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwspl\" (UniqueName: \"kubernetes.io/projected/3e9a3d78-f055-43d2-9d21-579d4a611d49-kube-api-access-lwspl\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.496802 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.597907 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.597984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e9a3d78-f055-43d2-9d21-579d4a611d49-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598011 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598027 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598054 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598091 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598169 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-secrets\") pod 
\"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598213 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwspl\" (UniqueName: \"kubernetes.io/projected/3e9a3d78-f055-43d2-9d21-579d4a611d49-kube-api-access-lwspl\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598505 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3e9a3d78-f055-43d2-9d21-579d4a611d49-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.598624 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.599087 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.600382 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.602441 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e9a3d78-f055-43d2-9d21-579d4a611d49-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.605090 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.605101 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 
crc kubenswrapper[4832]: I1002 18:39:54.616360 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwspl\" (UniqueName: \"kubernetes.io/projected/3e9a3d78-f055-43d2-9d21-579d4a611d49-kube-api-access-lwspl\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.617384 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9a3d78-f055-43d2-9d21-579d4a611d49-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.624112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"3e9a3d78-f055-43d2-9d21-579d4a611d49\") " pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.731799 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.816640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" event={"ID":"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1","Type":"ContainerStarted","Data":"7d8592e5c721770e60db7caf8c7082ecb7c711453e180c186b1130df63ba6266"} Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.818468 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" event={"ID":"271aa011-a53f-4340-be88-34fb8b95a78b","Type":"ContainerStarted","Data":"ceb74c241c99605eb909b2db32f010b11e749e19dc74f2beeea4f40f417fffeb"} Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.902643 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.904198 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0"
Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.908139 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.908266 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hnl2h"
Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.917954 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Oct 02 18:39:54 crc kubenswrapper[4832]: I1002 18:39:54.921552 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.006758 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-kolla-config\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.007046 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-config-data\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.007240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.007353 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fh65\" (UniqueName: \"kubernetes.io/projected/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-kube-api-access-6fh65\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.007534 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.108873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-config-data\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.109020 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.109052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fh65\" (UniqueName: \"kubernetes.io/projected/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-kube-api-access-6fh65\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.109120 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.109165 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-kolla-config\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.110174 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-kolla-config\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.110838 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-config-data\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.113194 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.114873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.146735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fh65\" (UniqueName: \"kubernetes.io/projected/84630b52-3d82-4ca3-aa26-0bf1b7ead64d-kube-api-access-6fh65\") pod \"memcached-0\" (UID: \"84630b52-3d82-4ca3-aa26-0bf1b7ead64d\") " pod="openstack/memcached-0"
Oct 02 18:39:55 crc kubenswrapper[4832]: I1002 18:39:55.228432 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Oct 02 18:39:56 crc kubenswrapper[4832]: I1002 18:39:56.968061 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 02 18:39:56 crc kubenswrapper[4832]: I1002 18:39:56.974440 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 02 18:39:56 crc kubenswrapper[4832]: I1002 18:39:56.981460 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-j6mnv"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.004586 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.142536 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrcmm\" (UniqueName: \"kubernetes.io/projected/1eb62c48-8808-44e9-8fbc-781e0d252f01-kube-api-access-lrcmm\") pod \"kube-state-metrics-0\" (UID: \"1eb62c48-8808-44e9-8fbc-781e0d252f01\") " pod="openstack/kube-state-metrics-0"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.246110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrcmm\" (UniqueName: \"kubernetes.io/projected/1eb62c48-8808-44e9-8fbc-781e0d252f01-kube-api-access-lrcmm\") pod \"kube-state-metrics-0\" (UID: \"1eb62c48-8808-44e9-8fbc-781e0d252f01\") " pod="openstack/kube-state-metrics-0"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.297319 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrcmm\" (UniqueName: \"kubernetes.io/projected/1eb62c48-8808-44e9-8fbc-781e0d252f01-kube-api-access-lrcmm\") pod \"kube-state-metrics-0\" (UID: \"1eb62c48-8808-44e9-8fbc-781e0d252f01\") " pod="openstack/kube-state-metrics-0"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.306676 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.690190 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"]
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.691614 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.694792 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-2mhqg"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.694963 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.703545 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"]
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.856546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fbc8db-b613-4de9-a177-3f7c5be4d857-serving-cert\") pod \"observability-ui-dashboards-6584dc9448-6ftdp\" (UID: \"28fbc8db-b613-4de9-a177-3f7c5be4d857\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.856792 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6kg\" (UniqueName: \"kubernetes.io/projected/28fbc8db-b613-4de9-a177-3f7c5be4d857-kube-api-access-qq6kg\") pod \"observability-ui-dashboards-6584dc9448-6ftdp\" (UID: \"28fbc8db-b613-4de9-a177-3f7c5be4d857\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.958304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fbc8db-b613-4de9-a177-3f7c5be4d857-serving-cert\") pod \"observability-ui-dashboards-6584dc9448-6ftdp\" (UID: \"28fbc8db-b613-4de9-a177-3f7c5be4d857\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.958342 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6kg\" (UniqueName: \"kubernetes.io/projected/28fbc8db-b613-4de9-a177-3f7c5be4d857-kube-api-access-qq6kg\") pod \"observability-ui-dashboards-6584dc9448-6ftdp\" (UID: \"28fbc8db-b613-4de9-a177-3f7c5be4d857\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:39:57 crc kubenswrapper[4832]: E1002 18:39:57.958495 4832 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found
Oct 02 18:39:57 crc kubenswrapper[4832]: E1002 18:39:57.958594 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28fbc8db-b613-4de9-a177-3f7c5be4d857-serving-cert podName:28fbc8db-b613-4de9-a177-3f7c5be4d857 nodeName:}" failed. No retries permitted until 2025-10-02 18:39:58.458570796 +0000 UTC m=+1155.428013658 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/28fbc8db-b613-4de9-a177-3f7c5be4d857-serving-cert") pod "observability-ui-dashboards-6584dc9448-6ftdp" (UID: "28fbc8db-b613-4de9-a177-3f7c5be4d857") : secret "observability-ui-dashboards" not found
Oct 02 18:39:57 crc kubenswrapper[4832]: I1002 18:39:57.985136 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6kg\" (UniqueName: \"kubernetes.io/projected/28fbc8db-b613-4de9-a177-3f7c5be4d857-kube-api-access-qq6kg\") pod \"observability-ui-dashboards-6584dc9448-6ftdp\" (UID: \"28fbc8db-b613-4de9-a177-3f7c5be4d857\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.138645 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d6db6476f-f2x78"]
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.142623 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.156789 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d6db6476f-f2x78"]
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.210429 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.217660 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.220492 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.220644 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.220753 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.220907 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.226737 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pxmx9"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.228882 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.234953 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.266315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-console-config\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.266387 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-oauth-serving-cert\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.266411 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-service-ca\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.266492 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dflpx\" (UniqueName: \"kubernetes.io/projected/6bb72ccc-f092-4715-a194-11e468feb838-kube-api-access-dflpx\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.266531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6bb72ccc-f092-4715-a194-11e468feb838-console-oauth-config\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.266581 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bb72ccc-f092-4715-a194-11e468feb838-console-serving-cert\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.266612 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-trusted-ca-bundle\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.367829 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.367902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6bb72ccc-f092-4715-a194-11e468feb838-console-oauth-config\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.367922 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/536c7c21-106b-48f8-9238-37b85edbf5f2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.367992 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bb72ccc-f092-4715-a194-11e468feb838-console-serving-cert\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/536c7c21-106b-48f8-9238-37b85edbf5f2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368050 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-trusted-ca-bundle\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368089 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-console-config\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368122 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-config\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-oauth-serving-cert\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-service-ca\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368229 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fknd\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-kube-api-access-2fknd\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368251 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368340 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.368356 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dflpx\" (UniqueName: \"kubernetes.io/projected/6bb72ccc-f092-4715-a194-11e468feb838-kube-api-access-dflpx\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.369294 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-service-ca\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.370151 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-console-config\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.370481 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-trusted-ca-bundle\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.370514 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6bb72ccc-f092-4715-a194-11e468feb838-oauth-serving-cert\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.374678 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bb72ccc-f092-4715-a194-11e468feb838-console-serving-cert\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.384908 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6bb72ccc-f092-4715-a194-11e468feb838-console-oauth-config\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.386473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dflpx\" (UniqueName: \"kubernetes.io/projected/6bb72ccc-f092-4715-a194-11e468feb838-kube-api-access-dflpx\") pod \"console-5d6db6476f-f2x78\" (UID: \"6bb72ccc-f092-4715-a194-11e468feb838\") " pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.459987 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d6db6476f-f2x78"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.469880 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.469935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.469990 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.470020 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/536c7c21-106b-48f8-9238-37b85edbf5f2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.470088 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/536c7c21-106b-48f8-9238-37b85edbf5f2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.470150 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-config\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.470197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fbc8db-b613-4de9-a177-3f7c5be4d857-serving-cert\") pod \"observability-ui-dashboards-6584dc9448-6ftdp\" (UID: \"28fbc8db-b613-4de9-a177-3f7c5be4d857\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.470236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fknd\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-kube-api-access-2fknd\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.470289 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.471207 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/536c7c21-106b-48f8-9238-37b85edbf5f2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.476834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.477151 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28fbc8db-b613-4de9-a177-3f7c5be4d857-serving-cert\") pod \"observability-ui-dashboards-6584dc9448-6ftdp\" (UID: \"28fbc8db-b613-4de9-a177-3f7c5be4d857\") " pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.477366 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-config\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.478064 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.478249 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5c8a2e34be458981b4e8afb2283a74ad10fe8fcab075c40f435ae20b523e7bdf/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.483808 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.493970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/536c7c21-106b-48f8-9238-37b85edbf5f2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.494636 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.495975 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fknd\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-kube-api-access-2fknd\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.526643 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"prometheus-metric-storage-0\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.560232 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Oct 02 18:39:58 crc kubenswrapper[4832]: I1002 18:39:58.635721 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.662208 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6trqf"]
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.664283 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.669664 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.669863 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-l2vw7"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.677723 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6trqf"]
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.677985 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.694526 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-g6w9z"]
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.698746 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.714589 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-g6w9z"]
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745028 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cjc\" (UniqueName: \"kubernetes.io/projected/37ac149f-65bb-4e89-911e-52f0c2434aad-kube-api-access-74cjc\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745093 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-run\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3533b085-2264-41c9-8feb-d8c6f40fa6c1-scripts\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745168 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtcfm\" (UniqueName: \"kubernetes.io/projected/3533b085-2264-41c9-8feb-d8c6f40fa6c1-kube-api-access-jtcfm\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745226 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-etc-ovs\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745257 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-run-ovn\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745429 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-run\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745481 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-log-ovn\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745544 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3533b085-2264-41c9-8feb-d8c6f40fa6c1-combined-ca-bundle\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745583 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37ac149f-65bb-4e89-911e-52f0c2434aad-scripts\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745619 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-lib\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-log\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.745705 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3533b085-2264-41c9-8feb-d8c6f40fa6c1-ovn-controller-tls-certs\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.847127 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3533b085-2264-41c9-8feb-d8c6f40fa6c1-combined-ca-bundle\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.847459 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37ac149f-65bb-4e89-911e-52f0c2434aad-scripts\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.847490 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-lib\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.848310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-lib\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.849847 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37ac149f-65bb-4e89-911e-52f0c2434aad-scripts\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.849999 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-log\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850047 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3533b085-2264-41c9-8feb-d8c6f40fa6c1-ovn-controller-tls-certs\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850165 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74cjc\" (UniqueName: \"kubernetes.io/projected/37ac149f-65bb-4e89-911e-52f0c2434aad-kube-api-access-74cjc\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850187 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-run\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850209 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3533b085-2264-41c9-8feb-d8c6f40fa6c1-scripts\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtcfm\" (UniqueName: \"kubernetes.io/projected/3533b085-2264-41c9-8feb-d8c6f40fa6c1-kube-api-access-jtcfm\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850324 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-etc-ovs\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850354 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-run-ovn\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850415 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-run\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-log-ovn\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850568 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-run\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.850642 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-log-ovn\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.851274 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-log\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.852284 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-etc-ovs\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.852282 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3533b085-2264-41c9-8feb-d8c6f40fa6c1-var-run-ovn\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.852406 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/37ac149f-65bb-4e89-911e-52f0c2434aad-var-run\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.853562 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3533b085-2264-41c9-8feb-d8c6f40fa6c1-combined-ca-bundle\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.855406 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3533b085-2264-41c9-8feb-d8c6f40fa6c1-ovn-controller-tls-certs\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.855883 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3533b085-2264-41c9-8feb-d8c6f40fa6c1-scripts\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.870371 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74cjc\" (UniqueName: \"kubernetes.io/projected/37ac149f-65bb-4e89-911e-52f0c2434aad-kube-api-access-74cjc\") pod \"ovn-controller-ovs-g6w9z\" (UID: \"37ac149f-65bb-4e89-911e-52f0c2434aad\") " pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.870824 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtcfm\" (UniqueName: \"kubernetes.io/projected/3533b085-2264-41c9-8feb-d8c6f40fa6c1-kube-api-access-jtcfm\") pod \"ovn-controller-6trqf\" (UID: \"3533b085-2264-41c9-8feb-d8c6f40fa6c1\") " pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.962782 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.965096 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.967545 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.967868 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.968104 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.968418 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.969045 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rnj65"
Oct 02 18:40:00 crc kubenswrapper[4832]: I1002 18:40:00.973992 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.009175 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6trqf"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.023221 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-g6w9z"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.053755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.053817 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.053857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.053917 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.053976 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-config\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.053996 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.054034 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.054054 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42s29\" (UniqueName: \"kubernetes.io/projected/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-kube-api-access-42s29\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.155955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.156055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-config\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.156077 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.156125 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.156154 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42s29\" (UniqueName: \"kubernetes.io/projected/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-kube-api-access-42s29\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.156218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.156250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.156304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.156706 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.157758 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.159111 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-config\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.159409 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.164477 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.165878 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.167319 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.176729 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42s29\" (UniqueName: \"kubernetes.io/projected/ccf82d19-ed89-43fc-b2e0-5b8d871db17a-kube-api-access-42s29\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.197634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ccf82d19-ed89-43fc-b2e0-5b8d871db17a\") " pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:01 crc kubenswrapper[4832]: I1002 18:40:01.335120 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.503054 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.505295 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.513731 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.514093 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.514181 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-26xjb"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.515455 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.520305 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.636624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.636693 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.636745 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxv6\" (UniqueName: \"kubernetes.io/projected/04d55a7f-36c2-4f79-9541-3e0bf14963ca-kube-api-access-qjxv6\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.636794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04d55a7f-36c2-4f79-9541-3e0bf14963ca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.636825 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.636851 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04d55a7f-36c2-4f79-9541-3e0bf14963ca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.636889 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.636911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d55a7f-36c2-4f79-9541-3e0bf14963ca-config\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739515 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04d55a7f-36c2-4f79-9541-3e0bf14963ca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d55a7f-36c2-4f79-9541-3e0bf14963ca-config\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739830 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739933 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxv6\" (UniqueName: \"kubernetes.io/projected/04d55a7f-36c2-4f79-9541-3e0bf14963ca-kube-api-access-qjxv6\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739951 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.739994 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04d55a7f-36c2-4f79-9541-3e0bf14963ca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.740846 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04d55a7f-36c2-4f79-9541-3e0bf14963ca-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.741148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04d55a7f-36c2-4f79-9541-3e0bf14963ca-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.741158 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04d55a7f-36c2-4f79-9541-3e0bf14963ca-config\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.746664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.746664 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.752945 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04d55a7f-36c2-4f79-9541-3e0bf14963ca-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.757866 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxv6\" (UniqueName: \"kubernetes.io/projected/04d55a7f-36c2-4f79-9541-3e0bf14963ca-kube-api-access-qjxv6\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.782378 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"04d55a7f-36c2-4f79-9541-3e0bf14963ca\") " pod="openstack/ovsdbserver-sb-0"
Oct 02 18:40:04 crc kubenswrapper[4832]: I1002 18:40:04.837905 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 02 18:40:06 crc kubenswrapper[4832]: E1002 18:40:06.465902 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 02 18:40:06 crc kubenswrapper[4832]: E1002 18:40:06.467050 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2c7vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ms6mp_openstack(0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:40:06 crc kubenswrapper[4832]: E1002 18:40:06.468339 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" podUID="0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa" Oct 02 18:40:06 crc kubenswrapper[4832]: E1002 18:40:06.532465 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 02 18:40:06 crc kubenswrapper[4832]: E1002 18:40:06.532630 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcr25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bwwxb_openstack(3f2a1028-527d-4a14-b015-d74c5b9ec1df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:40:06 crc kubenswrapper[4832]: E1002 18:40:06.533922 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" podUID="3f2a1028-527d-4a14-b015-d74c5b9ec1df" Oct 02 18:40:07 crc kubenswrapper[4832]: W1002 18:40:07.680085 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ff074fc_c56e_40f3_a327_b829d84c9866.slice/crio-2db439288a0da92db7ed6714a958e0cc6742fe4b130cbced48a5188ad1dca67e WatchSource:0}: Error finding container 2db439288a0da92db7ed6714a958e0cc6742fe4b130cbced48a5188ad1dca67e: Status 404 returned error can't find the container with id 2db439288a0da92db7ed6714a958e0cc6742fe4b130cbced48a5188ad1dca67e Oct 02 18:40:07 crc kubenswrapper[4832]: I1002 18:40:07.691338 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:40:07 crc kubenswrapper[4832]: W1002 18:40:07.692659 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9fd4cd0_fd84_45cb_9c68_0985f52a1054.slice/crio-70c3346e26d3494cc77da6d7674cb5cae6d44c053602109b0f8b08f5ffaf2b11 WatchSource:0}: Error finding container 
70c3346e26d3494cc77da6d7674cb5cae6d44c053602109b0f8b08f5ffaf2b11: Status 404 returned error can't find the container with id 70c3346e26d3494cc77da6d7674cb5cae6d44c053602109b0f8b08f5ffaf2b11 Oct 02 18:40:07 crc kubenswrapper[4832]: I1002 18:40:07.699961 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:40:07 crc kubenswrapper[4832]: I1002 18:40:07.961034 4832 generic.go:334] "Generic (PLEG): container finished" podID="271aa011-a53f-4340-be88-34fb8b95a78b" containerID="d7958a9b16048b9a45c19f62bd74b54067045f233417bd7c7d4384cc2bc3578c" exitCode=0 Oct 02 18:40:07 crc kubenswrapper[4832]: I1002 18:40:07.961391 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" event={"ID":"271aa011-a53f-4340-be88-34fb8b95a78b","Type":"ContainerDied","Data":"d7958a9b16048b9a45c19f62bd74b54067045f233417bd7c7d4384cc2bc3578c"} Oct 02 18:40:07 crc kubenswrapper[4832]: I1002 18:40:07.963013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ff074fc-c56e-40f3-a327-b829d84c9866","Type":"ContainerStarted","Data":"2db439288a0da92db7ed6714a958e0cc6742fe4b130cbced48a5188ad1dca67e"} Oct 02 18:40:07 crc kubenswrapper[4832]: I1002 18:40:07.968238 4832 generic.go:334] "Generic (PLEG): container finished" podID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerID="2734244fbb0afe09c3a729a6488f2bf2f4dcf41605695e4fc8f184bcc446d3e8" exitCode=0 Oct 02 18:40:07 crc kubenswrapper[4832]: I1002 18:40:07.968314 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" event={"ID":"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1","Type":"ContainerDied","Data":"2734244fbb0afe09c3a729a6488f2bf2f4dcf41605695e4fc8f184bcc446d3e8"} Oct 02 18:40:07 crc kubenswrapper[4832]: I1002 18:40:07.970949 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9fd4cd0-fd84-45cb-9c68-0985f52a1054","Type":"ContainerStarted","Data":"70c3346e26d3494cc77da6d7674cb5cae6d44c053602109b0f8b08f5ffaf2b11"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.415622 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 02 18:40:08 crc kubenswrapper[4832]: W1002 18:40:08.436127 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6c6d1dc_36df_4b33_8d10_dde52bd65630.slice/crio-216e146c1446c2e9588bd02a997f2bc966667fcd3875d05ad9151c0016951bdf WatchSource:0}: Error finding container 216e146c1446c2e9588bd02a997f2bc966667fcd3875d05ad9151c0016951bdf: Status 404 returned error can't find the container with id 216e146c1446c2e9588bd02a997f2bc966667fcd3875d05ad9151c0016951bdf Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.438808 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp"] Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.458252 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6trqf"] Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.495066 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.527639 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d6db6476f-f2x78"] Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.559572 4832 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.610166 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.687957 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c7vt\" (UniqueName: \"kubernetes.io/projected/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-kube-api-access-2c7vt\") pod \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.688099 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-dns-svc\") pod \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.688229 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-config\") pod \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\" (UID: \"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa\") " Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.688691 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa" (UID: "0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.688922 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-config" (OuterVolumeSpecName: "config") pod "0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa" (UID: "0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.689023 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.689043 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.693528 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-kube-api-access-2c7vt" (OuterVolumeSpecName: "kube-api-access-2c7vt") pod "0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa" (UID: "0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa"). InnerVolumeSpecName "kube-api-access-2c7vt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.790288 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcr25\" (UniqueName: \"kubernetes.io/projected/3f2a1028-527d-4a14-b015-d74c5b9ec1df-kube-api-access-rcr25\") pod \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\" (UID: \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\") " Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.790406 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2a1028-527d-4a14-b015-d74c5b9ec1df-config\") pod \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\" (UID: \"3f2a1028-527d-4a14-b015-d74c5b9ec1df\") " Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.797720 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f2a1028-527d-4a14-b015-d74c5b9ec1df-config" (OuterVolumeSpecName: "config") pod "3f2a1028-527d-4a14-b015-d74c5b9ec1df" (UID: "3f2a1028-527d-4a14-b015-d74c5b9ec1df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.797862 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c7vt\" (UniqueName: \"kubernetes.io/projected/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa-kube-api-access-2c7vt\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.803211 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2a1028-527d-4a14-b015-d74c5b9ec1df-kube-api-access-rcr25" (OuterVolumeSpecName: "kube-api-access-rcr25") pod "3f2a1028-527d-4a14-b015-d74c5b9ec1df" (UID: "3f2a1028-527d-4a14-b015-d74c5b9ec1df"). InnerVolumeSpecName "kube-api-access-rcr25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.899456 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcr25\" (UniqueName: \"kubernetes.io/projected/3f2a1028-527d-4a14-b015-d74c5b9ec1df-kube-api-access-rcr25\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.899517 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f2a1028-527d-4a14-b015-d74c5b9ec1df-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.982509 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" event={"ID":"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1","Type":"ContainerStarted","Data":"6b62ee17c64307bcdc2cf101f347c4efa15988cdcc9f3a36cb3253e1154a9dea"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.982644 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.984361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d6db6476f-f2x78" event={"ID":"6bb72ccc-f092-4715-a194-11e468feb838","Type":"ContainerStarted","Data":"f6d43ef765388b53f6640373d04e4d6436c6b53284eb63b808e6115d0856e7c9"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.984406 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d6db6476f-f2x78" event={"ID":"6bb72ccc-f092-4715-a194-11e468feb838","Type":"ContainerStarted","Data":"8625d94b01bf0d9160e61b61cd2b0140c420374c6730d07421abda431df4b79e"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.987249 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" event={"ID":"3f2a1028-527d-4a14-b015-d74c5b9ec1df","Type":"ContainerDied","Data":"e4286f6369c52920add63533e4832d590e8902e95b4ee12dd30d8c27a38efac7"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.987275 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bwwxb" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.988580 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" event={"ID":"0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa","Type":"ContainerDied","Data":"7d473cb3885077b4a081b1222b93793e1c7835b32e90efae69eed7bf55951a51"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.988602 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ms6mp" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.990369 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" event={"ID":"271aa011-a53f-4340-be88-34fb8b95a78b","Type":"ContainerStarted","Data":"7c9b5af3ade8828ff8f78b6d26e1e29cbcc30c5ca5a8a8be7da99012b98d0f2b"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.991125 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.992532 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d6c6d1dc-36df-4b33-8d10-dde52bd65630","Type":"ContainerStarted","Data":"216e146c1446c2e9588bd02a997f2bc966667fcd3875d05ad9151c0016951bdf"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.993882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6trqf" event={"ID":"3533b085-2264-41c9-8feb-d8c6f40fa6c1","Type":"ContainerStarted","Data":"29e9c096ce16be89969d5d19c8ee081a521bac548810109e9ab37c22de49f49d"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.995150 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"84630b52-3d82-4ca3-aa26-0bf1b7ead64d","Type":"ContainerStarted","Data":"3ef375edfeba78c790acf42127e3699ee630bd022d67c88af8f8679e5e47ffb1"} Oct 02 18:40:08 crc kubenswrapper[4832]: I1002 18:40:08.996508 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp" event={"ID":"28fbc8db-b613-4de9-a177-3f7c5be4d857","Type":"ContainerStarted","Data":"b07f9a2ed7e1cfc990ca677d1d5b87391dc5553858d9ee661e939ee3c872f3e9"} Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.027128 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" podStartSLOduration=6.198795756 podStartE2EDuration="19.027101782s" podCreationTimestamp="2025-10-02 18:39:50 +0000 UTC" firstStartedPulling="2025-10-02 18:39:54.098690972 +0000 UTC m=+1151.068133854" lastFinishedPulling="2025-10-02 18:40:06.926997008 +0000 UTC m=+1163.896439880" observedRunningTime="2025-10-02 18:40:09.022036022 +0000 UTC m=+1165.991478894" watchObservedRunningTime="2025-10-02 18:40:09.027101782 +0000 UTC m=+1165.996544654" Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.092684 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ms6mp"] Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.124472 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ms6mp"] Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.132433 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" podStartSLOduration=6.350464948 podStartE2EDuration="19.132414058s" podCreationTimestamp="2025-10-02 18:39:50 +0000 UTC" firstStartedPulling="2025-10-02 18:39:54.09451833 +0000 UTC m=+1151.063961212" lastFinishedPulling="2025-10-02 18:40:06.87646745 +0000 UTC m=+1163.845910322" observedRunningTime="2025-10-02 18:40:09.068734779 +0000 UTC m=+1166.038177661" watchObservedRunningTime="2025-10-02 18:40:09.132414058 +0000 UTC m=+1166.101856930" Oct 02 18:40:09 crc kubenswrapper[4832]: W1002 18:40:09.141783 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb62c48_8808_44e9_8fbc_781e0d252f01.slice/crio-a30a0fcdfa245af32363de0801496ac2b747ab4108c1498729d6fb759a6799b4 WatchSource:0}: Error finding container a30a0fcdfa245af32363de0801496ac2b747ab4108c1498729d6fb759a6799b4: Status 404 returned error can't find the container with id a30a0fcdfa245af32363de0801496ac2b747ab4108c1498729d6fb759a6799b4 Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.148953 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 02 18:40:09 crc kubenswrapper[4832]: W1002 18:40:09.170429 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e9a3d78_f055_43d2_9d21_579d4a611d49.slice/crio-9bf1d2a720d8c194d0bf30fb394e7f07d18abbee6345f0fac1585ee61d16ca6b WatchSource:0}: Error finding container 9bf1d2a720d8c194d0bf30fb394e7f07d18abbee6345f0fac1585ee61d16ca6b: Status 404 returned error can't find the container with id 9bf1d2a720d8c194d0bf30fb394e7f07d18abbee6345f0fac1585ee61d16ca6b Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.189454 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwwxb"] Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.203063 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.215992 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bwwxb"] Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.239037 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa" path="/var/lib/kubelet/pods/0a2d0d93-6a87-4ee4-a1fc-c375b9a6f5fa/volumes" Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.239587 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2a1028-527d-4a14-b015-d74c5b9ec1df" path="/var/lib/kubelet/pods/3f2a1028-527d-4a14-b015-d74c5b9ec1df/volumes" Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.251427 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.253901 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d6db6476f-f2x78" podStartSLOduration=11.253879542 podStartE2EDuration="11.253879542s" podCreationTimestamp="2025-10-02 18:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:40:09.139577503 +0000 UTC m=+1166.109020375" watchObservedRunningTime="2025-10-02 18:40:09.253879542 +0000 UTC m=+1166.223322414" Oct 02 18:40:09 crc kubenswrapper[4832]: I1002 18:40:09.460219 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 02 18:40:10 crc kubenswrapper[4832]: I1002 18:40:10.009779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1eb62c48-8808-44e9-8fbc-781e0d252f01","Type":"ContainerStarted","Data":"a30a0fcdfa245af32363de0801496ac2b747ab4108c1498729d6fb759a6799b4"} Oct 02 18:40:10 crc kubenswrapper[4832]: I1002 18:40:10.010840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"3e9a3d78-f055-43d2-9d21-579d4a611d49","Type":"ContainerStarted","Data":"9bf1d2a720d8c194d0bf30fb394e7f07d18abbee6345f0fac1585ee61d16ca6b"} Oct 02 18:40:10 crc kubenswrapper[4832]: I1002 18:40:10.011935 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ccf82d19-ed89-43fc-b2e0-5b8d871db17a","Type":"ContainerStarted","Data":"2c5923b8ce39fc7354ce01d90501522b106d477dc995d61c3ebff496d6dd1003"} Oct 02 18:40:10 crc kubenswrapper[4832]: I1002 18:40:10.014916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerStarted","Data":"cf42540a4b89b11c965629986713a0052e5b79dce1ba24d67e2820a833bdba9f"} Oct 02 18:40:10 crc kubenswrapper[4832]: I1002 18:40:10.179410 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 02 18:40:10 crc kubenswrapper[4832]: W1002 18:40:10.190285 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04d55a7f_36c2_4f79_9541_3e0bf14963ca.slice/crio-f9d3de0adca9558346d06f1d6d816de76e38c70052e1cf0c8f314140e82336f7 WatchSource:0}: Error finding container f9d3de0adca9558346d06f1d6d816de76e38c70052e1cf0c8f314140e82336f7: Status 404 returned error can't find the container with id f9d3de0adca9558346d06f1d6d816de76e38c70052e1cf0c8f314140e82336f7 Oct 02 18:40:10 crc kubenswrapper[4832]: I1002 18:40:10.530310 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-g6w9z"] Oct 02 18:40:11 crc kubenswrapper[4832]: I1002 18:40:11.032991 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"04d55a7f-36c2-4f79-9541-3e0bf14963ca","Type":"ContainerStarted","Data":"f9d3de0adca9558346d06f1d6d816de76e38c70052e1cf0c8f314140e82336f7"} Oct 02 18:40:13 crc kubenswrapper[4832]: I1002 18:40:13.918837 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-d5thb"] Oct 02 18:40:13 crc kubenswrapper[4832]: I1002 18:40:13.920835 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:13 crc kubenswrapper[4832]: I1002 18:40:13.924027 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 02 18:40:13 crc kubenswrapper[4832]: I1002 18:40:13.929656 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-d5thb"] Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.014150 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdf2a425-f35e-436a-ad17-c85f29e03490-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.014297 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf2a425-f35e-436a-ad17-c85f29e03490-combined-ca-bundle\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.014334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlhp2\" (UniqueName: \"kubernetes.io/projected/cdf2a425-f35e-436a-ad17-c85f29e03490-kube-api-access-rlhp2\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.014367 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cdf2a425-f35e-436a-ad17-c85f29e03490-ovs-rundir\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.014420 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf2a425-f35e-436a-ad17-c85f29e03490-config\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.014577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cdf2a425-f35e-436a-ad17-c85f29e03490-ovn-rundir\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.090167 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jlvv4"] Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.090514 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" containerName="dnsmasq-dns" containerID="cri-o://7c9b5af3ade8828ff8f78b6d26e1e29cbcc30c5ca5a8a8be7da99012b98d0f2b" gracePeriod=10 Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.094430 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.116893 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf2a425-f35e-436a-ad17-c85f29e03490-combined-ca-bundle\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.117226 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlhp2\" (UniqueName: \"kubernetes.io/projected/cdf2a425-f35e-436a-ad17-c85f29e03490-kube-api-access-rlhp2\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.117255 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cdf2a425-f35e-436a-ad17-c85f29e03490-ovs-rundir\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.117388 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf2a425-f35e-436a-ad17-c85f29e03490-config\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.117413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cdf2a425-f35e-436a-ad17-c85f29e03490-ovn-rundir\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.117469 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdf2a425-f35e-436a-ad17-c85f29e03490-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.118073 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cdf2a425-f35e-436a-ad17-c85f29e03490-ovn-rundir\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.118143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cdf2a425-f35e-436a-ad17-c85f29e03490-ovs-rundir\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.118520 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdf2a425-f35e-436a-ad17-c85f29e03490-config\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.129162 
4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdf2a425-f35e-436a-ad17-c85f29e03490-combined-ca-bundle\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.147052 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bvbpc"] Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.149210 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.151912 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.159442 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdf2a425-f35e-436a-ad17-c85f29e03490-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.167882 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlhp2\" (UniqueName: \"kubernetes.io/projected/cdf2a425-f35e-436a-ad17-c85f29e03490-kube-api-access-rlhp2\") pod \"ovn-controller-metrics-d5thb\" (UID: \"cdf2a425-f35e-436a-ad17-c85f29e03490\") " pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.193574 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bvbpc"] Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.254188 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-d5thb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.257469 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-pzwvs"] Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.257746 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerName="dnsmasq-dns" containerID="cri-o://6b62ee17c64307bcdc2cf101f347c4efa15988cdcc9f3a36cb3253e1154a9dea" gracePeriod=10 Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.259551 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.306179 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmx56"] Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.323280 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.323461 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-kube-api-access-zsdfl\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.323718 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.323781 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.323951 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-config\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.326052 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.353869 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmx56"] Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425438 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425493 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425520 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-config\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425547 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4jn2\" (UniqueName: \"kubernetes.io/projected/1982fd3e-190b-4955-9a76-6a35524fdaa1-kube-api-access-f4jn2\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425597 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-config\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.425659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-kube-api-access-zsdfl\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.426705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.427195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.427752 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-config\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.442004 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-kube-api-access-zsdfl\") pod \"dnsmasq-dns-7fd796d7df-bvbpc\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 
18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.527701 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.527762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.527811 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-config\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.527862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4jn2\" (UniqueName: \"kubernetes.io/projected/1982fd3e-190b-4955-9a76-6a35524fdaa1-kube-api-access-f4jn2\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.527892 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.528896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.529107 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.529161 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.529243 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-config\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.579025 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f4jn2\" (UniqueName: \"kubernetes.io/projected/1982fd3e-190b-4955-9a76-6a35524fdaa1-kube-api-access-f4jn2\") pod \"dnsmasq-dns-86db49b7ff-hmx56\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.599177 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:14 crc kubenswrapper[4832]: I1002 18:40:14.712029 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:15 crc kubenswrapper[4832]: I1002 18:40:15.091524 4832 generic.go:334] "Generic (PLEG): container finished" podID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerID="6b62ee17c64307bcdc2cf101f347c4efa15988cdcc9f3a36cb3253e1154a9dea" exitCode=0 Oct 02 18:40:15 crc kubenswrapper[4832]: I1002 18:40:15.091606 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" event={"ID":"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1","Type":"ContainerDied","Data":"6b62ee17c64307bcdc2cf101f347c4efa15988cdcc9f3a36cb3253e1154a9dea"} Oct 02 18:40:15 crc kubenswrapper[4832]: I1002 18:40:15.093159 4832 generic.go:334] "Generic (PLEG): container finished" podID="271aa011-a53f-4340-be88-34fb8b95a78b" containerID="7c9b5af3ade8828ff8f78b6d26e1e29cbcc30c5ca5a8a8be7da99012b98d0f2b" exitCode=0 Oct 02 18:40:15 crc kubenswrapper[4832]: I1002 18:40:15.093178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" event={"ID":"271aa011-a53f-4340-be88-34fb8b95a78b","Type":"ContainerDied","Data":"7c9b5af3ade8828ff8f78b6d26e1e29cbcc30c5ca5a8a8be7da99012b98d0f2b"} Oct 02 18:40:15 crc kubenswrapper[4832]: I1002 18:40:15.621100 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Oct 02 18:40:15 crc kubenswrapper[4832]: I1002 18:40:15.933566 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Oct 02 18:40:17 crc kubenswrapper[4832]: I1002 18:40:17.127478 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g6w9z" event={"ID":"37ac149f-65bb-4e89-911e-52f0c2434aad","Type":"ContainerStarted","Data":"ac97b57468488824b9e59c6b576e9e6c3807af4445f35ba8babd2523874481d7"} Oct 02 18:40:18 crc kubenswrapper[4832]: I1002 18:40:18.461164 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d6db6476f-f2x78" Oct 02 18:40:18 crc kubenswrapper[4832]: I1002 18:40:18.461526 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d6db6476f-f2x78" Oct 02 18:40:18 crc kubenswrapper[4832]: I1002 18:40:18.468372 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d6db6476f-f2x78" Oct 02 18:40:19 crc kubenswrapper[4832]: I1002 18:40:19.150841 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d6db6476f-f2x78" Oct 02 18:40:19 crc kubenswrapper[4832]: I1002 18:40:19.212447 4832 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-console/console-cf69dd54d-z6zmc"] Oct 02 18:40:23 crc kubenswrapper[4832]: E1002 18:40:23.212772 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Oct 02 18:40:23 crc kubenswrapper[4832]: E1002 18:40:23.213687 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8h558h8h5b4h646h647h576h76h5bfh577h65fh4h64bh57dhf9h664hb6h697h66fh68ch5f8h5cfhf8h578h8bh6bh698h9h68fhd8hcdhcbq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtcfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-6trqf_openstack(3533b085-2264-41c9-8feb-d8c6f40fa6c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:40:23 crc kubenswrapper[4832]: E1002 18:40:23.214936 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-6trqf" podUID="3533b085-2264-41c9-8feb-d8c6f40fa6c1" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.502118 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.521529 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.685245 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-dns-svc\") pod \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.685755 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8crj\" (UniqueName: \"kubernetes.io/projected/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-kube-api-access-v8crj\") pod \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.685825 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-config\") pod \"271aa011-a53f-4340-be88-34fb8b95a78b\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.685933 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-config\") pod \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\" (UID: \"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1\") " Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.685965 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dttlj\" (UniqueName: \"kubernetes.io/projected/271aa011-a53f-4340-be88-34fb8b95a78b-kube-api-access-dttlj\") pod \"271aa011-a53f-4340-be88-34fb8b95a78b\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.685991 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-dns-svc\") pod \"271aa011-a53f-4340-be88-34fb8b95a78b\" (UID: \"271aa011-a53f-4340-be88-34fb8b95a78b\") " Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.696460 4832 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/271aa011-a53f-4340-be88-34fb8b95a78b-kube-api-access-dttlj" (OuterVolumeSpecName: "kube-api-access-dttlj") pod "271aa011-a53f-4340-be88-34fb8b95a78b" (UID: "271aa011-a53f-4340-be88-34fb8b95a78b"). InnerVolumeSpecName "kube-api-access-dttlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.714856 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-kube-api-access-v8crj" (OuterVolumeSpecName: "kube-api-access-v8crj") pod "6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" (UID: "6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1"). InnerVolumeSpecName "kube-api-access-v8crj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.778223 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "271aa011-a53f-4340-be88-34fb8b95a78b" (UID: "271aa011-a53f-4340-be88-34fb8b95a78b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.789035 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dttlj\" (UniqueName: \"kubernetes.io/projected/271aa011-a53f-4340-be88-34fb8b95a78b-kube-api-access-dttlj\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.789065 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.789074 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8crj\" (UniqueName: \"kubernetes.io/projected/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-kube-api-access-v8crj\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.790890 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-config" (OuterVolumeSpecName: "config") pod "6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" (UID: "6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.796710 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" (UID: "6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.800842 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-config" (OuterVolumeSpecName: "config") pod "271aa011-a53f-4340-be88-34fb8b95a78b" (UID: "271aa011-a53f-4340-be88-34fb8b95a78b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.890798 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/271aa011-a53f-4340-be88-34fb8b95a78b-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.892216 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:23 crc kubenswrapper[4832]: I1002 18:40:23.892252 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.067602 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bvbpc"] Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.208977 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmx56"] Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.233540 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" event={"ID":"6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1","Type":"ContainerDied","Data":"7d8592e5c721770e60db7caf8c7082ecb7c711453e180c186b1130df63ba6266"} Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.233599 4832 scope.go:117] "RemoveContainer" containerID="6b62ee17c64307bcdc2cf101f347c4efa15988cdcc9f3a36cb3253e1154a9dea" Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.233733 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.241503 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" event={"ID":"271aa011-a53f-4340-be88-34fb8b95a78b","Type":"ContainerDied","Data":"ceb74c241c99605eb909b2db32f010b11e749e19dc74f2beeea4f40f417fffeb"} Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.241542 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" Oct 02 18:40:24 crc kubenswrapper[4832]: E1002 18:40:24.242928 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-6trqf" podUID="3533b085-2264-41c9-8feb-d8c6f40fa6c1" Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.288408 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-pzwvs"] Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.298625 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-pzwvs"] Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.308201 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jlvv4"] Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.315766 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jlvv4"] Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.365321 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-d5thb"] Oct 02 18:40:24 crc kubenswrapper[4832]: W1002 18:40:24.931070 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cdfa6d5_5b7f_4f34_8fbd_cdff31666de4.slice/crio-bca1eee8ca7dd952339ea080269c3d4c41653383fa30e077ebaf97755827f096 WatchSource:0}: Error finding container bca1eee8ca7dd952339ea080269c3d4c41653383fa30e077ebaf97755827f096: Status 404 returned error can't find the container with id bca1eee8ca7dd952339ea080269c3d4c41653383fa30e077ebaf97755827f096 Oct 02 18:40:24 crc kubenswrapper[4832]: W1002 18:40:24.940364 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1982fd3e_190b_4955_9a76_6a35524fdaa1.slice/crio-60bd859e0c54160a9f00b53b6c83608126a02dd839edbf0fd2d035197171dde4 WatchSource:0}: Error finding container 60bd859e0c54160a9f00b53b6c83608126a02dd839edbf0fd2d035197171dde4: Status 404 returned error can't find the container with id 60bd859e0c54160a9f00b53b6c83608126a02dd839edbf0fd2d035197171dde4 Oct 02 18:40:24 crc kubenswrapper[4832]: I1002 18:40:24.960472 4832 scope.go:117] "RemoveContainer" containerID="2734244fbb0afe09c3a729a6488f2bf2f4dcf41605695e4fc8f184bcc446d3e8" Oct 02 18:40:25 crc kubenswrapper[4832]: I1002 18:40:25.141468 4832 scope.go:117] "RemoveContainer" containerID="7c9b5af3ade8828ff8f78b6d26e1e29cbcc30c5ca5a8a8be7da99012b98d0f2b" Oct 02 18:40:25 crc kubenswrapper[4832]: I1002 18:40:25.244638 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" path="/var/lib/kubelet/pods/271aa011-a53f-4340-be88-34fb8b95a78b/volumes" Oct 02 18:40:25 crc kubenswrapper[4832]: I1002 18:40:25.245755 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" path="/var/lib/kubelet/pods/6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1/volumes" Oct 02 18:40:25 crc kubenswrapper[4832]: I1002 18:40:25.252187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" event={"ID":"1982fd3e-190b-4955-9a76-6a35524fdaa1","Type":"ContainerStarted","Data":"60bd859e0c54160a9f00b53b6c83608126a02dd839edbf0fd2d035197171dde4"} Oct 02 18:40:25 
crc kubenswrapper[4832]: I1002 18:40:25.256693 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d5thb" event={"ID":"cdf2a425-f35e-436a-ad17-c85f29e03490","Type":"ContainerStarted","Data":"e8136b0a7100f23bf329e86d2d3b96f1eb6ffa3580486f3f776bd2b4e01b5842"} Oct 02 18:40:25 crc kubenswrapper[4832]: I1002 18:40:25.274827 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" event={"ID":"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4","Type":"ContainerStarted","Data":"bca1eee8ca7dd952339ea080269c3d4c41653383fa30e077ebaf97755827f096"} Oct 02 18:40:25 crc kubenswrapper[4832]: I1002 18:40:25.469166 4832 scope.go:117] "RemoveContainer" containerID="d7958a9b16048b9a45c19f62bd74b54067045f233417bd7c7d4384cc2bc3578c" Oct 02 18:40:25 crc kubenswrapper[4832]: I1002 18:40:25.621010 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc8479f9-pzwvs" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Oct 02 18:40:25 crc kubenswrapper[4832]: I1002 18:40:25.934085 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-jlvv4" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Oct 02 18:40:26 crc kubenswrapper[4832]: I1002 18:40:26.287032 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp" event={"ID":"28fbc8db-b613-4de9-a177-3f7c5be4d857","Type":"ContainerStarted","Data":"699d2961cbb3ec90477121034690e1398eb59800b4d27adea23f98cef3753ca9"} Oct 02 18:40:26 crc kubenswrapper[4832]: I1002 18:40:26.292370 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d6c6d1dc-36df-4b33-8d10-dde52bd65630","Type":"ContainerStarted","Data":"3d9d13f4b92f56658e7705ad2009f88b32626f6edc7ac63f1ee2f7242a4496c3"} Oct 02 18:40:26 crc kubenswrapper[4832]: I1002 18:40:26.298031 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"84630b52-3d82-4ca3-aa26-0bf1b7ead64d","Type":"ContainerStarted","Data":"66efb380f550ad71f918ebf2cba966d930fcfd440d142e2df01436a0b7616285"} Oct 02 18:40:26 crc kubenswrapper[4832]: I1002 18:40:26.299191 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 02 18:40:26 crc kubenswrapper[4832]: I1002 18:40:26.332069 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-6584dc9448-6ftdp" podStartSLOduration=14.493663172 podStartE2EDuration="29.332050335s" podCreationTimestamp="2025-10-02 18:39:57 +0000 UTC" firstStartedPulling="2025-10-02 18:40:08.457407606 +0000 UTC m=+1165.426850478" lastFinishedPulling="2025-10-02 18:40:23.295794749 +0000 UTC m=+1180.265237641" observedRunningTime="2025-10-02 18:40:26.300575926 +0000 UTC m=+1183.270018818" watchObservedRunningTime="2025-10-02 18:40:26.332050335 +0000 UTC m=+1183.301493207" Oct 02 18:40:26 crc kubenswrapper[4832]: I1002 18:40:26.356758 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.588682096 podStartE2EDuration="32.3567389s" podCreationTimestamp="2025-10-02 18:39:54 +0000 UTC" firstStartedPulling="2025-10-02 18:40:08.516479401 +0000 UTC m=+1165.485922273" 
lastFinishedPulling="2025-10-02 18:40:23.284536205 +0000 UTC m=+1180.253979077" observedRunningTime="2025-10-02 18:40:26.349149042 +0000 UTC m=+1183.318591914" watchObservedRunningTime="2025-10-02 18:40:26.3567389 +0000 UTC m=+1183.326181772" Oct 02 18:40:27 crc kubenswrapper[4832]: I1002 18:40:27.338379 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ff074fc-c56e-40f3-a327-b829d84c9866","Type":"ContainerStarted","Data":"b21ca3741d0fca37291dae0e91f9ff546ba06e2cd3972007eb0e57226a7359cc"} Oct 02 18:40:28 crc kubenswrapper[4832]: I1002 18:40:28.349986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9fd4cd0-fd84-45cb-9c68-0985f52a1054","Type":"ContainerStarted","Data":"aab15a231a3fad3f097a709bc28c3653595cdc6fce81258a72e2be1d69cc979f"} Oct 02 18:40:28 crc kubenswrapper[4832]: I1002 18:40:28.353657 4832 generic.go:334] "Generic (PLEG): container finished" podID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerID="31d84ac5414f729d7195792a78f549256969ca0db2427c712df9be944f7fb4de" exitCode=0 Oct 02 18:40:28 crc kubenswrapper[4832]: I1002 18:40:28.353716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" event={"ID":"1982fd3e-190b-4955-9a76-6a35524fdaa1","Type":"ContainerDied","Data":"31d84ac5414f729d7195792a78f549256969ca0db2427c712df9be944f7fb4de"} Oct 02 18:40:28 crc kubenswrapper[4832]: I1002 18:40:28.359914 4832 generic.go:334] "Generic (PLEG): container finished" podID="37ac149f-65bb-4e89-911e-52f0c2434aad" containerID="0579ba6fedd8cdaba12f684782bfacc480ae4aad364c396935b3fe5ed34f191a" exitCode=0 Oct 02 18:40:28 crc kubenswrapper[4832]: I1002 18:40:28.360001 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g6w9z" event={"ID":"37ac149f-65bb-4e89-911e-52f0c2434aad","Type":"ContainerDied","Data":"0579ba6fedd8cdaba12f684782bfacc480ae4aad364c396935b3fe5ed34f191a"} Oct 02 18:40:29 crc kubenswrapper[4832]: I1002 18:40:29.376960 4832 generic.go:334] "Generic (PLEG): container finished" podID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" containerID="ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1" exitCode=0 Oct 02 18:40:29 crc kubenswrapper[4832]: I1002 18:40:29.378169 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" event={"ID":"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4","Type":"ContainerDied","Data":"ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1"} Oct 02 18:40:29 crc kubenswrapper[4832]: I1002 18:40:29.381710 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3e9a3d78-f055-43d2-9d21-579d4a611d49","Type":"ContainerStarted","Data":"c5266726375613cc8f5b7a69c1278223c302467e28fd41a49193c486221d639d"} Oct 02 18:40:29 crc kubenswrapper[4832]: I1002 18:40:29.383135 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ccf82d19-ed89-43fc-b2e0-5b8d871db17a","Type":"ContainerStarted","Data":"338a410536562ccda919ea293d133694d6695facc05486a4fbcb3dfbdd2e6f8b"} Oct 02 18:40:30 crc kubenswrapper[4832]: I1002 18:40:30.231473 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 02 18:40:30 crc kubenswrapper[4832]: I1002 18:40:30.393770 4832 generic.go:334] "Generic (PLEG): container finished" podID="d6c6d1dc-36df-4b33-8d10-dde52bd65630" 
containerID="3d9d13f4b92f56658e7705ad2009f88b32626f6edc7ac63f1ee2f7242a4496c3" exitCode=0 Oct 02 18:40:30 crc kubenswrapper[4832]: I1002 18:40:30.394765 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d6c6d1dc-36df-4b33-8d10-dde52bd65630","Type":"ContainerDied","Data":"3d9d13f4b92f56658e7705ad2009f88b32626f6edc7ac63f1ee2f7242a4496c3"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.417281 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g6w9z" event={"ID":"37ac149f-65bb-4e89-911e-52f0c2434aad","Type":"ContainerStarted","Data":"e7adfc9064ac0ab465656a27a3d9fd172c86890d2e1825b340a7ab685dfee8ef"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.417777 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-g6w9z" event={"ID":"37ac149f-65bb-4e89-911e-52f0c2434aad","Type":"ContainerStarted","Data":"663808939b65dcf4c73eefb41dd3629f28c0eab24edcecd4252f149cbd73c62f"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.422864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d5thb" event={"ID":"cdf2a425-f35e-436a-ad17-c85f29e03490","Type":"ContainerStarted","Data":"9705e4837f216ce5721380b1632e56f582d8aa19fbb511d3338c5313f88afbb8"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.437393 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"04d55a7f-36c2-4f79-9541-3e0bf14963ca","Type":"ContainerStarted","Data":"95d07f0b612e77715279d6e57ccb4ef405cc47c3c51e165bfd202d24de0d1510"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.437623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"04d55a7f-36c2-4f79-9541-3e0bf14963ca","Type":"ContainerStarted","Data":"3285a47182db300619e9a384a55cb004c331ca3fcbc2284de53468fcaa0c1f2d"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.443155 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ccf82d19-ed89-43fc-b2e0-5b8d871db17a","Type":"ContainerStarted","Data":"52fae258eadd889e67467e8aafcdbc3a7c64932e908b4bfc9ed8036b757fde15"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.447697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d6c6d1dc-36df-4b33-8d10-dde52bd65630","Type":"ContainerStarted","Data":"98c6e8220817e5e3101e8a153583571fe168c8347a1dfdd7111bfa7aa95bc8d4"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.447685 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-g6w9z" podStartSLOduration=22.803953767 podStartE2EDuration="31.447661193s" podCreationTimestamp="2025-10-02 18:40:00 +0000 UTC" firstStartedPulling="2025-10-02 18:40:16.466385955 +0000 UTC m=+1173.435828827" lastFinishedPulling="2025-10-02 18:40:25.110093381 +0000 UTC m=+1182.079536253" observedRunningTime="2025-10-02 18:40:31.446675543 +0000 UTC m=+1188.416118415" watchObservedRunningTime="2025-10-02 18:40:31.447661193 +0000 UTC m=+1188.417104065" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.452503 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" event={"ID":"1982fd3e-190b-4955-9a76-6a35524fdaa1","Type":"ContainerStarted","Data":"f8acf50c09acfcd48ae5bd1112be2bdf423dbcb5d28ae7b6140598d03768a67a"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.453642 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.459018 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1eb62c48-8808-44e9-8fbc-781e0d252f01","Type":"ContainerStarted","Data":"414911ca9e1a1d92fcc7716770a0ac8e7d74081d10394f6ede3d2691b0cf872e"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.459164 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.460956 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerStarted","Data":"05fddf18aa25dc2b83a394bc823c1a48f99b5985179b5382fdcf393572c7197a"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.463525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" event={"ID":"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4","Type":"ContainerStarted","Data":"f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74"} Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.463687 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.471338 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.380182758 podStartE2EDuration="28.471318436s" podCreationTimestamp="2025-10-02 18:40:03 +0000 UTC" firstStartedPulling="2025-10-02 18:40:10.193896945 +0000 UTC m=+1167.163339817" lastFinishedPulling="2025-10-02 18:40:25.285032613 +0000 UTC m=+1182.254475495" observedRunningTime="2025-10-02 18:40:31.469646514 +0000 UTC m=+1188.439089386" watchObservedRunningTime="2025-10-02 18:40:31.471318436 +0000 UTC m=+1188.440761298" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.492460 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-d5thb" podStartSLOduration=13.979633266 podStartE2EDuration="18.49243995s" podCreationTimestamp="2025-10-02 18:40:13 +0000 UTC" firstStartedPulling="2025-10-02 18:40:24.960512884 +0000 UTC m=+1181.929955776" lastFinishedPulling="2025-10-02 18:40:29.473319588 +0000 UTC m=+1186.442762460" observedRunningTime="2025-10-02 18:40:31.488978801 +0000 UTC m=+1188.458421673" watchObservedRunningTime="2025-10-02 18:40:31.49243995 +0000 UTC m=+1188.461882822" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.528538 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.562821652 podStartE2EDuration="32.528514101s" podCreationTimestamp="2025-10-02 18:39:59 +0000 UTC" firstStartedPulling="2025-10-02 18:40:09.486754143 +0000 UTC m=+1166.456197015" lastFinishedPulling="2025-10-02 18:40:29.452446592 +0000 UTC m=+1186.421889464" observedRunningTime="2025-10-02 18:40:31.527890992 +0000 UTC m=+1188.497333864" watchObservedRunningTime="2025-10-02 18:40:31.528514101 +0000 UTC m=+1188.497956973" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.652632 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" podStartSLOduration=17.652607898 podStartE2EDuration="17.652607898s" podCreationTimestamp="2025-10-02 18:40:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:40:31.600915205 +0000 UTC m=+1188.570358077" watchObservedRunningTime="2025-10-02 18:40:31.652607898 +0000 UTC m=+1188.622050780" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.663205 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.46430708 podStartE2EDuration="35.66317781s" podCreationTimestamp="2025-10-02 18:39:56 +0000 UTC" firstStartedPulling="2025-10-02 18:40:09.150024731 +0000 UTC m=+1166.119467603" lastFinishedPulling="2025-10-02 18:40:29.348895461 +0000 UTC m=+1186.318338333" observedRunningTime="2025-10-02 18:40:31.624353881 +0000 UTC m=+1188.593796763" watchObservedRunningTime="2025-10-02 18:40:31.66317781 +0000 UTC m=+1188.632620682" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.671506 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.769893733 podStartE2EDuration="38.67147957s" podCreationTimestamp="2025-10-02 18:39:53 +0000 UTC" firstStartedPulling="2025-10-02 18:40:08.439415731 +0000 UTC m=+1165.408858603" lastFinishedPulling="2025-10-02 18:40:23.341001568 +0000 UTC m=+1180.310444440" observedRunningTime="2025-10-02 18:40:31.645799814 +0000 UTC m=+1188.615242676" watchObservedRunningTime="2025-10-02 18:40:31.67147957 +0000 UTC m=+1188.640922452" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.683513 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" podStartSLOduration=17.683487248 podStartE2EDuration="17.683487248s" podCreationTimestamp="2025-10-02 18:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:40:31.668890669 +0000 UTC m=+1188.638333561" watchObservedRunningTime="2025-10-02 18:40:31.683487248 +0000 UTC m=+1188.652930120" Oct 02 18:40:31 crc kubenswrapper[4832]: I1002 18:40:31.838913 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 02 18:40:32 crc kubenswrapper[4832]: I1002 18:40:32.477440 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-g6w9z" Oct 02 18:40:32 crc kubenswrapper[4832]: I1002 18:40:32.478823 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-g6w9z" Oct 02 18:40:32 crc kubenswrapper[4832]: E1002 18:40:32.663308 4832 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.180:37732->38.102.83.180:36377: write tcp 38.102.83.180:37732->38.102.83.180:36377: write: broken pipe Oct 02 18:40:33 crc kubenswrapper[4832]: I1002 18:40:33.488186 4832 generic.go:334] "Generic (PLEG): container finished" podID="3e9a3d78-f055-43d2-9d21-579d4a611d49" containerID="c5266726375613cc8f5b7a69c1278223c302467e28fd41a49193c486221d639d" exitCode=0 Oct 02 18:40:33 crc kubenswrapper[4832]: I1002 18:40:33.488398 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3e9a3d78-f055-43d2-9d21-579d4a611d49","Type":"ContainerDied","Data":"c5266726375613cc8f5b7a69c1278223c302467e28fd41a49193c486221d639d"} Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.335785 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.371300 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.371382 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.414987 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.504564 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"3e9a3d78-f055-43d2-9d21-579d4a611d49","Type":"ContainerStarted","Data":"d0ed21daf428d87e54ec7466b03b7ab934f6dfeba6a770b1733b8f43732ee441"} Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.504963 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.561913 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.584014 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.797749479 podStartE2EDuration="41.583994301s" podCreationTimestamp="2025-10-02 18:39:53 +0000 UTC" firstStartedPulling="2025-10-02 18:40:09.173926971 +0000 UTC m=+1166.143369843" lastFinishedPulling="2025-10-02 18:40:24.960171793 +0000 UTC m=+1181.929614665" observedRunningTime="2025-10-02 18:40:34.532641319 +0000 UTC m=+1191.502084201" watchObservedRunningTime="2025-10-02 18:40:34.583994301 +0000 UTC m=+1191.553437183" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.732538 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.732607 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.838250 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 02 18:40:34 crc kubenswrapper[4832]: I1002 18:40:34.884438 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.566602 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.765729 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 02 18:40:35 crc kubenswrapper[4832]: E1002 18:40:35.766241 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" containerName="init" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.766281 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" containerName="init" Oct 02 18:40:35 crc kubenswrapper[4832]: E1002 18:40:35.766306 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerName="dnsmasq-dns" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.766316 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" 
containerName="dnsmasq-dns" Oct 02 18:40:35 crc kubenswrapper[4832]: E1002 18:40:35.766338 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" containerName="dnsmasq-dns" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.766349 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" containerName="dnsmasq-dns" Oct 02 18:40:35 crc kubenswrapper[4832]: E1002 18:40:35.766385 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerName="init" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.766393 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerName="init" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.766626 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="271aa011-a53f-4340-be88-34fb8b95a78b" containerName="dnsmasq-dns" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.766658 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc9d6ca-a3c7-40fa-b0a6-c78a55fdf9c1" containerName="dnsmasq-dns" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.768048 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.773634 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.773826 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.773924 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.778940 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.792768 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mrzk7" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.943188 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85cf9359-d7f1-4634-9421-0dffdfb488e0-scripts\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.943485 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.943547 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnqwp\" (UniqueName: \"kubernetes.io/projected/85cf9359-d7f1-4634-9421-0dffdfb488e0-kube-api-access-lnqwp\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.943724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/85cf9359-d7f1-4634-9421-0dffdfb488e0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.943821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.943897 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85cf9359-d7f1-4634-9421-0dffdfb488e0-config\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:35 crc kubenswrapper[4832]: I1002 18:40:35.943945 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.045716 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnqwp\" (UniqueName: \"kubernetes.io/projected/85cf9359-d7f1-4634-9421-0dffdfb488e0-kube-api-access-lnqwp\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.045789 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85cf9359-d7f1-4634-9421-0dffdfb488e0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.045823 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.045878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85cf9359-d7f1-4634-9421-0dffdfb488e0-config\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.045915 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.045949 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85cf9359-d7f1-4634-9421-0dffdfb488e0-scripts\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.045985 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.046495 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85cf9359-d7f1-4634-9421-0dffdfb488e0-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.047155 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85cf9359-d7f1-4634-9421-0dffdfb488e0-config\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.047410 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85cf9359-d7f1-4634-9421-0dffdfb488e0-scripts\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.052925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.054361 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.061137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cf9359-d7f1-4634-9421-0dffdfb488e0-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.067190 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnqwp\" (UniqueName: \"kubernetes.io/projected/85cf9359-d7f1-4634-9421-0dffdfb488e0-kube-api-access-lnqwp\") pod \"ovn-northd-0\" (UID: \"85cf9359-d7f1-4634-9421-0dffdfb488e0\") " pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.090751 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.551921 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.615036 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="d6c6d1dc-36df-4b33-8d10-dde52bd65630" containerName="galera" probeResult="failure" output=< Oct 02 18:40:36 crc kubenswrapper[4832]: wsrep_local_state_comment (Joined) differs from Synced Oct 02 18:40:36 crc kubenswrapper[4832]: > Oct 02 18:40:36 crc kubenswrapper[4832]: I1002 18:40:36.660638 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 02 18:40:36 crc kubenswrapper[4832]: W1002 18:40:36.670632 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85cf9359_d7f1_4634_9421_0dffdfb488e0.slice/crio-e870723e9a6623319dbc97d0cda08ee0cd35a0eddfeb167beefc0b4c80b26cd1 WatchSource:0}: Error finding container e870723e9a6623319dbc97d0cda08ee0cd35a0eddfeb167beefc0b4c80b26cd1: Status 404 returned error can't find the container with id e870723e9a6623319dbc97d0cda08ee0cd35a0eddfeb167beefc0b4c80b26cd1 Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.317084 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bvbpc"] Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.317661 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" podUID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" containerName="dnsmasq-dns" containerID="cri-o://f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74" gracePeriod=10 Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.323484 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.323755 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.347375 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-bvkbq"] Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.349159 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.401534 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-bvkbq"] Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.474594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-config\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.474650 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-dns-svc\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.474669 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.474719 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.474737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbk9b\" (UniqueName: \"kubernetes.io/projected/d420d739-ca22-4260-b5ca-75b21d89248b-kube-api-access-zbk9b\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.527975 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"85cf9359-d7f1-4634-9421-0dffdfb488e0","Type":"ContainerStarted","Data":"e870723e9a6623319dbc97d0cda08ee0cd35a0eddfeb167beefc0b4c80b26cd1"} Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.576623 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-dns-svc\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.576658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.576712 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.576730 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbk9b\" (UniqueName: \"kubernetes.io/projected/d420d739-ca22-4260-b5ca-75b21d89248b-kube-api-access-zbk9b\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.576846 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-config\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.577685 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-config\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.578205 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-dns-svc\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.578808 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.579753 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.600281 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbk9b\" (UniqueName: \"kubernetes.io/projected/d420d739-ca22-4260-b5ca-75b21d89248b-kube-api-access-zbk9b\") pod \"dnsmasq-dns-698758b865-bvkbq\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:37 crc kubenswrapper[4832]: I1002 18:40:37.689965 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.165124 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-bvkbq"] Oct 02 18:40:38 crc kubenswrapper[4832]: W1002 18:40:38.170771 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd420d739_ca22_4260_b5ca_75b21d89248b.slice/crio-124f308c3aef9ed48d124866e5d2c3f86f094d0962444fae043c905882c2e0f6 WatchSource:0}: Error finding container 124f308c3aef9ed48d124866e5d2c3f86f094d0962444fae043c905882c2e0f6: Status 404 returned error can't find the container with id 124f308c3aef9ed48d124866e5d2c3f86f094d0962444fae043c905882c2e0f6 Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.432451 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.492893 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 02 18:40:38 crc kubenswrapper[4832]: E1002 18:40:38.493249 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" containerName="dnsmasq-dns" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.493883 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" containerName="dnsmasq-dns" Oct 02 18:40:38 crc kubenswrapper[4832]: E1002 18:40:38.493959 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" containerName="init" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.493967 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" containerName="init" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.494496 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" containerName="dnsmasq-dns" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.499181 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-kube-api-access-zsdfl\") pod \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.499436 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-config\") pod \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.499683 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-ovsdbserver-nb\") pod \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.499725 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-dns-svc\") pod \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\" (UID: \"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4\") " Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.502433 4832 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.507800 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-kube-api-access-zsdfl" (OuterVolumeSpecName: "kube-api-access-zsdfl") pod "1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" (UID: "1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4"). InnerVolumeSpecName "kube-api-access-zsdfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.508699 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.508973 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.513908 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.520089 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qt2sf" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.520223 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.556506 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-bvkbq" event={"ID":"d420d739-ca22-4260-b5ca-75b21d89248b","Type":"ContainerStarted","Data":"124f308c3aef9ed48d124866e5d2c3f86f094d0962444fae043c905882c2e0f6"} Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.558210 4832 generic.go:334] "Generic (PLEG): container finished" podID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerID="05fddf18aa25dc2b83a394bc823c1a48f99b5985179b5382fdcf393572c7197a" exitCode=0 Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.558289 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerDied","Data":"05fddf18aa25dc2b83a394bc823c1a48f99b5985179b5382fdcf393572c7197a"} Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.560639 4832 generic.go:334] "Generic (PLEG): container finished" podID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" containerID="f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74" exitCode=0 Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.560671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" event={"ID":"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4","Type":"ContainerDied","Data":"f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74"} Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.560691 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" event={"ID":"1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4","Type":"ContainerDied","Data":"bca1eee8ca7dd952339ea080269c3d4c41653383fa30e077ebaf97755827f096"} Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.560706 4832 scope.go:117] "RemoveContainer" containerID="f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.560800 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-bvbpc" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.572413 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" (UID: "1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.575464 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" (UID: "1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.612854 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-config" (OuterVolumeSpecName: "config") pod "1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" (UID: "1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.612919 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mfv\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-kube-api-access-w8mfv\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.613068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7b8400-95d5-481a-a9a1-d5b2586f159f-lock\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.613309 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.613397 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.613677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df7b8400-95d5-481a-a9a1-d5b2586f159f-cache\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.613822 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.613839 4832 reconciler_common.go:293] 
"Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.613852 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.613864 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsdfl\" (UniqueName: \"kubernetes.io/projected/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4-kube-api-access-zsdfl\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.691042 4832 scope.go:117] "RemoveContainer" containerID="ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.715946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mfv\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-kube-api-access-w8mfv\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.716034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7b8400-95d5-481a-a9a1-d5b2586f159f-lock\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.716136 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.716173 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.716353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/df7b8400-95d5-481a-a9a1-d5b2586f159f-cache\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.716633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/df7b8400-95d5-481a-a9a1-d5b2586f159f-lock\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.716671 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.716713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/df7b8400-95d5-481a-a9a1-d5b2586f159f-cache\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: E1002 18:40:38.716810 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 18:40:38 crc kubenswrapper[4832]: E1002 18:40:38.716833 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 18:40:38 crc kubenswrapper[4832]: E1002 18:40:38.716877 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift podName:df7b8400-95d5-481a-a9a1-d5b2586f159f nodeName:}" failed. No retries permitted until 2025-10-02 18:40:39.216858976 +0000 UTC m=+1196.186301848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift") pod "swift-storage-0" (UID: "df7b8400-95d5-481a-a9a1-d5b2586f159f") : configmap "swift-ring-files" not found Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.726671 4832 scope.go:117] "RemoveContainer" containerID="f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74" Oct 02 18:40:38 crc kubenswrapper[4832]: E1002 18:40:38.729377 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74\": container with ID starting with f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74 not found: ID does not exist" containerID="f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.729417 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74"} err="failed to get container status \"f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74\": rpc error: code = NotFound desc = could not find container \"f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74\": container with ID starting with f6f3fb37f75bc84037f65fe8c1ec53e19d3b79b56db59dff906b41f955caab74 not found: ID does not exist" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.729447 4832 scope.go:117] "RemoveContainer" containerID="ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1" Oct 02 18:40:38 crc kubenswrapper[4832]: E1002 18:40:38.733057 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1\": container with ID starting with ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1 not found: ID does not exist" containerID="ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.733097 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1"} err="failed to get container status \"ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1\": rpc error: code = NotFound desc = could not find container \"ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1\": container 
with ID starting with ba86701e0fa7dde9f58a4d247d85d321138386cdc7ff8b4b4076ad4a13b18af1 not found: ID does not exist" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.745168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mfv\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-kube-api-access-w8mfv\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.788486 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:38 crc kubenswrapper[4832]: I1002 18:40:38.994386 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bvbpc"] Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.009518 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-bvbpc"] Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.046114 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-d85cq"] Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.047556 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.049784 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.051998 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.052163 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.062909 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-d85cq"] Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.096668 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-d85cq"] Oct 02 18:40:39 crc kubenswrapper[4832]: E1002 18:40:39.097438 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-s2w8j ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-d85cq" podUID="5d4229c7-c28d-4678-a474-e45006cc84f7" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.105974 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7zhzt"] Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.107288 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.121226 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7zhzt"] Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.128396 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-combined-ca-bundle\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.128452 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d4229c7-c28d-4678-a474-e45006cc84f7-etc-swift\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.128532 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-scripts\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.128557 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-dispersionconf\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.128645 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2w8j\" (UniqueName: \"kubernetes.io/projected/5d4229c7-c28d-4678-a474-e45006cc84f7-kube-api-access-s2w8j\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.128730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-ring-data-devices\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.128873 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-swiftconf\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230190 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-combined-ca-bundle\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230225 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d4229c7-c28d-4678-a474-e45006cc84f7-etc-swift\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230287 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-combined-ca-bundle\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230306 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-ring-data-devices\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230324 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-dispersionconf\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230351 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gngqr\" (UniqueName: \"kubernetes.io/projected/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-kube-api-access-gngqr\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230382 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-scripts\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230398 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-scripts\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230415 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-etc-swift\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230430 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-swiftconf\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230451 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-dispersionconf\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230471 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2w8j\" (UniqueName: \"kubernetes.io/projected/5d4229c7-c28d-4678-a474-e45006cc84f7-kube-api-access-s2w8j\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230496 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-ring-data-devices\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.230536 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-swiftconf\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: E1002 18:40:39.231109 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 18:40:39 crc kubenswrapper[4832]: E1002 18:40:39.231127 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 18:40:39 crc kubenswrapper[4832]: E1002 18:40:39.231167 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift podName:df7b8400-95d5-481a-a9a1-d5b2586f159f nodeName:}" failed. No retries permitted until 2025-10-02 18:40:40.231152453 +0000 UTC m=+1197.200595325 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift") pod "swift-storage-0" (UID: "df7b8400-95d5-481a-a9a1-d5b2586f159f") : configmap "swift-ring-files" not found Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.231519 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d4229c7-c28d-4678-a474-e45006cc84f7-etc-swift\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.232100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-ring-data-devices\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.232498 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-scripts\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.233608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-swiftconf\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.234009 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-combined-ca-bundle\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.235827 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-dispersionconf\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.236347 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4" path="/var/lib/kubelet/pods/1cdfa6d5-5b7f-4f34-8fbd-cdff31666de4/volumes" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.249160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2w8j\" (UniqueName: \"kubernetes.io/projected/5d4229c7-c28d-4678-a474-e45006cc84f7-kube-api-access-s2w8j\") pod \"swift-ring-rebalance-d85cq\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.331993 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-combined-ca-bundle\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc 
kubenswrapper[4832]: I1002 18:40:39.332036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-ring-data-devices\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.332059 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-dispersionconf\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.332086 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gngqr\" (UniqueName: \"kubernetes.io/projected/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-kube-api-access-gngqr\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.332152 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-scripts\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.332170 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-etc-swift\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.332187 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-swiftconf\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.332818 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-etc-swift\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.333884 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-ring-data-devices\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.334308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-scripts\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.337015 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-dispersionconf\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.337435 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-swiftconf\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.337818 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-combined-ca-bundle\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.350326 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gngqr\" (UniqueName: \"kubernetes.io/projected/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-kube-api-access-gngqr\") pod \"swift-ring-rebalance-7zhzt\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.425388 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.576783 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.588191 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.643707 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-swiftconf\") pod \"5d4229c7-c28d-4678-a474-e45006cc84f7\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.643872 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-combined-ca-bundle\") pod \"5d4229c7-c28d-4678-a474-e45006cc84f7\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.643914 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-ring-data-devices\") pod \"5d4229c7-c28d-4678-a474-e45006cc84f7\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.643959 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2w8j\" (UniqueName: \"kubernetes.io/projected/5d4229c7-c28d-4678-a474-e45006cc84f7-kube-api-access-s2w8j\") pod \"5d4229c7-c28d-4678-a474-e45006cc84f7\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.644055 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d4229c7-c28d-4678-a474-e45006cc84f7-etc-swift\") pod \"5d4229c7-c28d-4678-a474-e45006cc84f7\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.644091 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-scripts\") pod \"5d4229c7-c28d-4678-a474-e45006cc84f7\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.644157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-dispersionconf\") pod \"5d4229c7-c28d-4678-a474-e45006cc84f7\" (UID: \"5d4229c7-c28d-4678-a474-e45006cc84f7\") " Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.646531 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5d4229c7-c28d-4678-a474-e45006cc84f7" (UID: "5d4229c7-c28d-4678-a474-e45006cc84f7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.646818 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d4229c7-c28d-4678-a474-e45006cc84f7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5d4229c7-c28d-4678-a474-e45006cc84f7" (UID: "5d4229c7-c28d-4678-a474-e45006cc84f7"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.647475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-scripts" (OuterVolumeSpecName: "scripts") pod "5d4229c7-c28d-4678-a474-e45006cc84f7" (UID: "5d4229c7-c28d-4678-a474-e45006cc84f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.654882 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5d4229c7-c28d-4678-a474-e45006cc84f7" (UID: "5d4229c7-c28d-4678-a474-e45006cc84f7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.655767 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d4229c7-c28d-4678-a474-e45006cc84f7" (UID: "5d4229c7-c28d-4678-a474-e45006cc84f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.659427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4229c7-c28d-4678-a474-e45006cc84f7-kube-api-access-s2w8j" (OuterVolumeSpecName: "kube-api-access-s2w8j") pod "5d4229c7-c28d-4678-a474-e45006cc84f7" (UID: "5d4229c7-c28d-4678-a474-e45006cc84f7"). InnerVolumeSpecName "kube-api-access-s2w8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.680574 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5d4229c7-c28d-4678-a474-e45006cc84f7" (UID: "5d4229c7-c28d-4678-a474-e45006cc84f7"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.715436 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.746954 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.747000 4832 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.747013 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2w8j\" (UniqueName: \"kubernetes.io/projected/5d4229c7-c28d-4678-a474-e45006cc84f7-kube-api-access-s2w8j\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.747028 4832 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5d4229c7-c28d-4678-a474-e45006cc84f7-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.747040 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4229c7-c28d-4678-a474-e45006cc84f7-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.747051 4832 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.747063 4832 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5d4229c7-c28d-4678-a474-e45006cc84f7-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:39 crc kubenswrapper[4832]: I1002 18:40:39.938478 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-7zhzt"] Oct 02 18:40:40 crc kubenswrapper[4832]: I1002 18:40:40.277622 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:40 crc kubenswrapper[4832]: E1002 18:40:40.277814 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 18:40:40 crc kubenswrapper[4832]: E1002 18:40:40.277843 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 18:40:40 crc kubenswrapper[4832]: E1002 18:40:40.277900 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift podName:df7b8400-95d5-481a-a9a1-d5b2586f159f nodeName:}" failed. No retries permitted until 2025-10-02 18:40:42.277880696 +0000 UTC m=+1199.247323568 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift") pod "swift-storage-0" (UID: "df7b8400-95d5-481a-a9a1-d5b2586f159f") : configmap "swift-ring-files" not found Oct 02 18:40:40 crc kubenswrapper[4832]: W1002 18:40:40.295235 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f58f07d_fb3b_4be8_a9b0_221aa5c01316.slice/crio-d485a83750bdcd35f4294d67192dbacc631124805032566741b150e62c2a04fd WatchSource:0}: Error finding container d485a83750bdcd35f4294d67192dbacc631124805032566741b150e62c2a04fd: Status 404 returned error can't find the container with id d485a83750bdcd35f4294d67192dbacc631124805032566741b150e62c2a04fd Oct 02 18:40:40 crc kubenswrapper[4832]: I1002 18:40:40.595865 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7zhzt" event={"ID":"3f58f07d-fb3b-4be8-a9b0-221aa5c01316","Type":"ContainerStarted","Data":"d485a83750bdcd35f4294d67192dbacc631124805032566741b150e62c2a04fd"} Oct 02 18:40:40 crc kubenswrapper[4832]: I1002 18:40:40.602612 4832 generic.go:334] "Generic (PLEG): container finished" podID="d420d739-ca22-4260-b5ca-75b21d89248b" containerID="1f9a095214cd1030e8ae9292f31240f9e6e829633daca43acb7e8bc9c602ae01" exitCode=0 Oct 02 18:40:40 crc kubenswrapper[4832]: I1002 18:40:40.602714 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-d85cq" Oct 02 18:40:40 crc kubenswrapper[4832]: I1002 18:40:40.602914 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-bvkbq" event={"ID":"d420d739-ca22-4260-b5ca-75b21d89248b","Type":"ContainerDied","Data":"1f9a095214cd1030e8ae9292f31240f9e6e829633daca43acb7e8bc9c602ae01"} Oct 02 18:40:40 crc kubenswrapper[4832]: I1002 18:40:40.749228 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-d85cq"] Oct 02 18:40:40 crc kubenswrapper[4832]: I1002 18:40:40.751245 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-d85cq"] Oct 02 18:40:41 crc kubenswrapper[4832]: I1002 18:40:41.248096 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4229c7-c28d-4678-a474-e45006cc84f7" path="/var/lib/kubelet/pods/5d4229c7-c28d-4678-a474-e45006cc84f7/volumes" Oct 02 18:40:41 crc kubenswrapper[4832]: I1002 18:40:41.641446 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"85cf9359-d7f1-4634-9421-0dffdfb488e0","Type":"ContainerStarted","Data":"db83d53694400533ed0376581565eb944dcc1dfecf3752edd4b36fe27f14eb64"} Oct 02 18:40:41 crc kubenswrapper[4832]: I1002 18:40:41.644088 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-bvkbq" event={"ID":"d420d739-ca22-4260-b5ca-75b21d89248b","Type":"ContainerStarted","Data":"e22e6bb8f21faaa9e5928a7da1d104f929bc1fa75a15302672b4a0eb4bca0e0e"} Oct 02 18:40:41 crc kubenswrapper[4832]: I1002 18:40:41.645189 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:42 crc kubenswrapper[4832]: I1002 18:40:42.335119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " 
pod="openstack/swift-storage-0" Oct 02 18:40:42 crc kubenswrapper[4832]: E1002 18:40:42.337322 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 18:40:42 crc kubenswrapper[4832]: E1002 18:40:42.337359 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 18:40:42 crc kubenswrapper[4832]: E1002 18:40:42.337418 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift podName:df7b8400-95d5-481a-a9a1-d5b2586f159f nodeName:}" failed. No retries permitted until 2025-10-02 18:40:46.337397646 +0000 UTC m=+1203.306840528 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift") pod "swift-storage-0" (UID: "df7b8400-95d5-481a-a9a1-d5b2586f159f") : configmap "swift-ring-files" not found Oct 02 18:40:42 crc kubenswrapper[4832]: I1002 18:40:42.656709 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6trqf" event={"ID":"3533b085-2264-41c9-8feb-d8c6f40fa6c1","Type":"ContainerStarted","Data":"914f523183f72a85094ded78ea80bd60f59e649ca2f788b003f60f501e8d56f7"} Oct 02 18:40:42 crc kubenswrapper[4832]: I1002 18:40:42.657168 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-6trqf" Oct 02 18:40:42 crc kubenswrapper[4832]: I1002 18:40:42.674792 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"85cf9359-d7f1-4634-9421-0dffdfb488e0","Type":"ContainerStarted","Data":"7f4aacb1e46a1d4a63e04c62ec92bf22aa075491658f2a5502e8ed38142a2cf0"} Oct 02 18:40:42 crc kubenswrapper[4832]: I1002 18:40:42.674866 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 02 18:40:42 crc kubenswrapper[4832]: I1002 18:40:42.684104 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-bvkbq" podStartSLOduration=5.68408678 podStartE2EDuration="5.68408678s" podCreationTimestamp="2025-10-02 18:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:40:41.675682591 +0000 UTC m=+1198.645125463" watchObservedRunningTime="2025-10-02 18:40:42.68408678 +0000 UTC m=+1199.653529652" Oct 02 18:40:42 crc kubenswrapper[4832]: I1002 18:40:42.691127 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6trqf" podStartSLOduration=10.036610956 podStartE2EDuration="42.691110021s" podCreationTimestamp="2025-10-02 18:40:00 +0000 UTC" firstStartedPulling="2025-10-02 18:40:08.516426868 +0000 UTC m=+1165.485869740" lastFinishedPulling="2025-10-02 18:40:41.170925933 +0000 UTC m=+1198.140368805" observedRunningTime="2025-10-02 18:40:42.681679984 +0000 UTC m=+1199.651122856" watchObservedRunningTime="2025-10-02 18:40:42.691110021 +0000 UTC m=+1199.660552893" Oct 02 18:40:42 crc kubenswrapper[4832]: I1002 18:40:42.710176 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.209598741 podStartE2EDuration="7.710157979s" podCreationTimestamp="2025-10-02 18:40:35 +0000 UTC" firstStartedPulling="2025-10-02 18:40:36.673336948 +0000 UTC m=+1193.642779810" 
lastFinishedPulling="2025-10-02 18:40:41.173896176 +0000 UTC m=+1198.143339048" observedRunningTime="2025-10-02 18:40:42.700364691 +0000 UTC m=+1199.669807563" watchObservedRunningTime="2025-10-02 18:40:42.710157979 +0000 UTC m=+1199.679600861" Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.278649 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-cf69dd54d-z6zmc" podUID="ea039358-89f3-4cab-a81f-77dbdbd6e667" containerName="console" containerID="cri-o://451c4ed9951e1b81154922c3a20b1d06a62b8589f785cc1f722a5941ad65fe55" gracePeriod=15 Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.448636 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.703475 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cf69dd54d-z6zmc_ea039358-89f3-4cab-a81f-77dbdbd6e667/console/0.log" Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.703523 4832 generic.go:334] "Generic (PLEG): container finished" podID="ea039358-89f3-4cab-a81f-77dbdbd6e667" containerID="451c4ed9951e1b81154922c3a20b1d06a62b8589f785cc1f722a5941ad65fe55" exitCode=2 Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.703559 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cf69dd54d-z6zmc" event={"ID":"ea039358-89f3-4cab-a81f-77dbdbd6e667","Type":"ContainerDied","Data":"451c4ed9951e1b81154922c3a20b1d06a62b8589f785cc1f722a5941ad65fe55"} Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.728245 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.798021 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="3e9a3d78-f055-43d2-9d21-579d4a611d49" containerName="galera" probeResult="failure" output=< Oct 02 18:40:44 crc kubenswrapper[4832]: wsrep_local_state_comment (Joined) differs from Synced Oct 02 18:40:44 crc kubenswrapper[4832]: > Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.859824 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.886187 4832 patch_prober.go:28] interesting pod/console-cf69dd54d-z6zmc container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.94:8443/health\": dial tcp 10.217.0.94:8443: connect: connection refused" start-of-body= Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.886242 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-cf69dd54d-z6zmc" podUID="ea039358-89f3-4cab-a81f-77dbdbd6e667" containerName="console" probeResult="failure" output="Get \"https://10.217.0.94:8443/health\": dial tcp 10.217.0.94:8443: connect: connection refused" Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.972789 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-9vtdd"] Oct 02 18:40:44 crc kubenswrapper[4832]: I1002 18:40:44.975913 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9vtdd" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.000361 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9vtdd"] Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.115004 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54k5p\" (UniqueName: \"kubernetes.io/projected/7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f-kube-api-access-54k5p\") pod \"keystone-db-create-9vtdd\" (UID: \"7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f\") " pod="openstack/keystone-db-create-9vtdd" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.123166 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hrk2h"] Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.124804 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hrk2h" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.134878 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hrk2h"] Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.216681 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54k5p\" (UniqueName: \"kubernetes.io/projected/7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f-kube-api-access-54k5p\") pod \"keystone-db-create-9vtdd\" (UID: \"7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f\") " pod="openstack/keystone-db-create-9vtdd" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.216765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqw9n\" (UniqueName: \"kubernetes.io/projected/ea8be728-da97-4ac8-91ed-f43b4c0b249b-kube-api-access-nqw9n\") pod \"placement-db-create-hrk2h\" (UID: \"ea8be728-da97-4ac8-91ed-f43b4c0b249b\") " pod="openstack/placement-db-create-hrk2h" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.238926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54k5p\" (UniqueName: \"kubernetes.io/projected/7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f-kube-api-access-54k5p\") pod \"keystone-db-create-9vtdd\" (UID: \"7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f\") " pod="openstack/keystone-db-create-9vtdd" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.307191 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9vtdd" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.318636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqw9n\" (UniqueName: \"kubernetes.io/projected/ea8be728-da97-4ac8-91ed-f43b4c0b249b-kube-api-access-nqw9n\") pod \"placement-db-create-hrk2h\" (UID: \"ea8be728-da97-4ac8-91ed-f43b4c0b249b\") " pod="openstack/placement-db-create-hrk2h" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.343167 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqw9n\" (UniqueName: \"kubernetes.io/projected/ea8be728-da97-4ac8-91ed-f43b4c0b249b-kube-api-access-nqw9n\") pod \"placement-db-create-hrk2h\" (UID: \"ea8be728-da97-4ac8-91ed-f43b4c0b249b\") " pod="openstack/placement-db-create-hrk2h" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.439600 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hrk2h" Oct 02 18:40:45 crc kubenswrapper[4832]: I1002 18:40:45.730561 4832 scope.go:117] "RemoveContainer" containerID="66cabeaf3c4a425df26ab26a42a0732c139e51265a5f4495d7a35d84e6527920" Oct 02 18:40:46 crc kubenswrapper[4832]: I1002 18:40:46.340175 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:46 crc kubenswrapper[4832]: E1002 18:40:46.342719 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 18:40:46 crc kubenswrapper[4832]: E1002 18:40:46.342769 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 18:40:46 crc kubenswrapper[4832]: E1002 18:40:46.342872 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift podName:df7b8400-95d5-481a-a9a1-d5b2586f159f nodeName:}" failed. No retries permitted until 2025-10-02 18:40:54.342838099 +0000 UTC m=+1211.312281061 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift") pod "swift-storage-0" (UID: "df7b8400-95d5-481a-a9a1-d5b2586f159f") : configmap "swift-ring-files" not found Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.146051 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n7h42"] Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.148845 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n7h42" Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.154506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n7h42"] Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.260619 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfrtk\" (UniqueName: \"kubernetes.io/projected/86c035df-1cf1-477d-b195-e9096de5360f-kube-api-access-gfrtk\") pod \"mysqld-exporter-openstack-db-create-n7h42\" (UID: \"86c035df-1cf1-477d-b195-e9096de5360f\") " pod="openstack/mysqld-exporter-openstack-db-create-n7h42" Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.363554 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfrtk\" (UniqueName: \"kubernetes.io/projected/86c035df-1cf1-477d-b195-e9096de5360f-kube-api-access-gfrtk\") pod \"mysqld-exporter-openstack-db-create-n7h42\" (UID: \"86c035df-1cf1-477d-b195-e9096de5360f\") " pod="openstack/mysqld-exporter-openstack-db-create-n7h42" Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.408424 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfrtk\" (UniqueName: \"kubernetes.io/projected/86c035df-1cf1-477d-b195-e9096de5360f-kube-api-access-gfrtk\") pod \"mysqld-exporter-openstack-db-create-n7h42\" (UID: \"86c035df-1cf1-477d-b195-e9096de5360f\") " pod="openstack/mysqld-exporter-openstack-db-create-n7h42" Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.472292 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n7h42" Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.691469 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.760564 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmx56"] Oct 02 18:40:47 crc kubenswrapper[4832]: I1002 18:40:47.761245 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" podUID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerName="dnsmasq-dns" containerID="cri-o://f8acf50c09acfcd48ae5bd1112be2bdf423dbcb5d28ae7b6140598d03768a67a" gracePeriod=10 Oct 02 18:40:48 crc kubenswrapper[4832]: I1002 18:40:48.768930 4832 generic.go:334] "Generic (PLEG): container finished" podID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerID="f8acf50c09acfcd48ae5bd1112be2bdf423dbcb5d28ae7b6140598d03768a67a" exitCode=0 Oct 02 18:40:48 crc kubenswrapper[4832]: I1002 18:40:48.769002 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" event={"ID":"1982fd3e-190b-4955-9a76-6a35524fdaa1","Type":"ContainerDied","Data":"f8acf50c09acfcd48ae5bd1112be2bdf423dbcb5d28ae7b6140598d03768a67a"} Oct 02 18:40:50 crc kubenswrapper[4832]: I1002 18:40:50.392108 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jzjbv"] Oct 02 18:40:50 crc kubenswrapper[4832]: I1002 18:40:50.424362 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jzjbv" Oct 02 18:40:50 crc kubenswrapper[4832]: I1002 18:40:50.433974 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jzjbv"] Oct 02 18:40:50 crc kubenswrapper[4832]: I1002 18:40:50.531576 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svh4j\" (UniqueName: \"kubernetes.io/projected/a6e63ccc-48be-4d43-aff6-144ad30107df-kube-api-access-svh4j\") pod \"glance-db-create-jzjbv\" (UID: \"a6e63ccc-48be-4d43-aff6-144ad30107df\") " pod="openstack/glance-db-create-jzjbv" Oct 02 18:40:50 crc kubenswrapper[4832]: I1002 18:40:50.634298 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svh4j\" (UniqueName: \"kubernetes.io/projected/a6e63ccc-48be-4d43-aff6-144ad30107df-kube-api-access-svh4j\") pod \"glance-db-create-jzjbv\" (UID: \"a6e63ccc-48be-4d43-aff6-144ad30107df\") " pod="openstack/glance-db-create-jzjbv" Oct 02 18:40:50 crc kubenswrapper[4832]: I1002 18:40:50.658434 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svh4j\" (UniqueName: \"kubernetes.io/projected/a6e63ccc-48be-4d43-aff6-144ad30107df-kube-api-access-svh4j\") pod \"glance-db-create-jzjbv\" (UID: \"a6e63ccc-48be-4d43-aff6-144ad30107df\") " pod="openstack/glance-db-create-jzjbv" Oct 02 18:40:50 crc kubenswrapper[4832]: I1002 18:40:50.751391 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jzjbv" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.021472 4832 scope.go:117] "RemoveContainer" containerID="0c60d029c80ea515c6b3fbe8091574fcfb84dc1cc2634d34e0b7d0001fce639c" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.110597 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.182370 4832 scope.go:117] "RemoveContainer" containerID="c102b9eed4eb0371c7f99e922211e7b4c5b1988627019f2b919d30c4b9b92d90" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.198032 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.267417 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-nb\") pod \"1982fd3e-190b-4955-9a76-6a35524fdaa1\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.267576 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-sb\") pod \"1982fd3e-190b-4955-9a76-6a35524fdaa1\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.267635 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4jn2\" (UniqueName: \"kubernetes.io/projected/1982fd3e-190b-4955-9a76-6a35524fdaa1-kube-api-access-f4jn2\") pod \"1982fd3e-190b-4955-9a76-6a35524fdaa1\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.267696 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-dns-svc\") pod \"1982fd3e-190b-4955-9a76-6a35524fdaa1\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.267724 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-config\") pod \"1982fd3e-190b-4955-9a76-6a35524fdaa1\" (UID: \"1982fd3e-190b-4955-9a76-6a35524fdaa1\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.279527 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1982fd3e-190b-4955-9a76-6a35524fdaa1-kube-api-access-f4jn2" (OuterVolumeSpecName: "kube-api-access-f4jn2") pod "1982fd3e-190b-4955-9a76-6a35524fdaa1" (UID: "1982fd3e-190b-4955-9a76-6a35524fdaa1"). InnerVolumeSpecName "kube-api-access-f4jn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.370723 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4jn2\" (UniqueName: \"kubernetes.io/projected/1982fd3e-190b-4955-9a76-6a35524fdaa1-kube-api-access-f4jn2\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.532238 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1982fd3e-190b-4955-9a76-6a35524fdaa1" (UID: "1982fd3e-190b-4955-9a76-6a35524fdaa1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.538342 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cf69dd54d-z6zmc_ea039358-89f3-4cab-a81f-77dbdbd6e667/console/0.log" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.538783 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.541714 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-config" (OuterVolumeSpecName: "config") pod "1982fd3e-190b-4955-9a76-6a35524fdaa1" (UID: "1982fd3e-190b-4955-9a76-6a35524fdaa1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.577294 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.577322 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.580244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1982fd3e-190b-4955-9a76-6a35524fdaa1" (UID: "1982fd3e-190b-4955-9a76-6a35524fdaa1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.583146 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1982fd3e-190b-4955-9a76-6a35524fdaa1" (UID: "1982fd3e-190b-4955-9a76-6a35524fdaa1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.678555 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-oauth-config\") pod \"ea039358-89f3-4cab-a81f-77dbdbd6e667\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.678605 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-config\") pod \"ea039358-89f3-4cab-a81f-77dbdbd6e667\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.678653 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-serving-cert\") pod \"ea039358-89f3-4cab-a81f-77dbdbd6e667\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.678690 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-trusted-ca-bundle\") pod \"ea039358-89f3-4cab-a81f-77dbdbd6e667\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.678732 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7275v\" (UniqueName: \"kubernetes.io/projected/ea039358-89f3-4cab-a81f-77dbdbd6e667-kube-api-access-7275v\") pod \"ea039358-89f3-4cab-a81f-77dbdbd6e667\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.678789 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-oauth-serving-cert\") pod \"ea039358-89f3-4cab-a81f-77dbdbd6e667\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.678848 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-service-ca\") pod \"ea039358-89f3-4cab-a81f-77dbdbd6e667\" (UID: \"ea039358-89f3-4cab-a81f-77dbdbd6e667\") " Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.679500 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.679517 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1982fd3e-190b-4955-9a76-6a35524fdaa1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.679940 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-service-ca" (OuterVolumeSpecName: "service-ca") pod "ea039358-89f3-4cab-a81f-77dbdbd6e667" (UID: "ea039358-89f3-4cab-a81f-77dbdbd6e667"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.679958 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ea039358-89f3-4cab-a81f-77dbdbd6e667" (UID: "ea039358-89f3-4cab-a81f-77dbdbd6e667"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.679985 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-config" (OuterVolumeSpecName: "console-config") pod "ea039358-89f3-4cab-a81f-77dbdbd6e667" (UID: "ea039358-89f3-4cab-a81f-77dbdbd6e667"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.680363 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ea039358-89f3-4cab-a81f-77dbdbd6e667" (UID: "ea039358-89f3-4cab-a81f-77dbdbd6e667"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.683224 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ea039358-89f3-4cab-a81f-77dbdbd6e667" (UID: "ea039358-89f3-4cab-a81f-77dbdbd6e667"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.684036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ea039358-89f3-4cab-a81f-77dbdbd6e667" (UID: "ea039358-89f3-4cab-a81f-77dbdbd6e667"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.684427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea039358-89f3-4cab-a81f-77dbdbd6e667-kube-api-access-7275v" (OuterVolumeSpecName: "kube-api-access-7275v") pod "ea039358-89f3-4cab-a81f-77dbdbd6e667" (UID: "ea039358-89f3-4cab-a81f-77dbdbd6e667"). InnerVolumeSpecName "kube-api-access-7275v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.780767 4832 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-service-ca\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.780798 4832 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.780809 4832 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.780820 4832 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea039358-89f3-4cab-a81f-77dbdbd6e667-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.780830 4832 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.780842 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7275v\" (UniqueName: \"kubernetes.io/projected/ea039358-89f3-4cab-a81f-77dbdbd6e667-kube-api-access-7275v\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.780850 4832 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea039358-89f3-4cab-a81f-77dbdbd6e667-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.803299 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" event={"ID":"1982fd3e-190b-4955-9a76-6a35524fdaa1","Type":"ContainerDied","Data":"60bd859e0c54160a9f00b53b6c83608126a02dd839edbf0fd2d035197171dde4"} Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.803362 4832 scope.go:117] "RemoveContainer" containerID="f8acf50c09acfcd48ae5bd1112be2bdf423dbcb5d28ae7b6140598d03768a67a" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.803376 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.805048 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cf69dd54d-z6zmc" event={"ID":"ea039358-89f3-4cab-a81f-77dbdbd6e667","Type":"ContainerDied","Data":"bd40e9a359b9a50745d2cec8ed876a744ea5b4f16f1ae137ac02b8b1e805d9eb"} Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.805131 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cf69dd54d-z6zmc" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.813025 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerStarted","Data":"e90b5dbe1bd14707ec66e7587df1b454483eb11ca307138d4b652ffd026ac72f"} Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.815196 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7zhzt" event={"ID":"3f58f07d-fb3b-4be8-a9b0-221aa5c01316","Type":"ContainerStarted","Data":"504933bc18cf6aadf6931dd83138dea4bbf1dc196a8c034ac43e1bd93f77fb86"} Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.855946 4832 scope.go:117] "RemoveContainer" containerID="31d84ac5414f729d7195792a78f549256969ca0db2427c712df9be944f7fb4de" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.867744 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-7zhzt" podStartSLOduration=2.11381164 podStartE2EDuration="12.867727128s" podCreationTimestamp="2025-10-02 18:40:39 +0000 UTC" firstStartedPulling="2025-10-02 18:40:40.299163434 +0000 UTC m=+1197.268606306" lastFinishedPulling="2025-10-02 18:40:51.053078912 +0000 UTC m=+1208.022521794" observedRunningTime="2025-10-02 18:40:51.856288239 +0000 UTC m=+1208.825731111" watchObservedRunningTime="2025-10-02 18:40:51.867727128 +0000 UTC m=+1208.837170000" Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.868354 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jzjbv"] Oct 02 18:40:51 crc kubenswrapper[4832]: W1002 18:40:51.900889 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7588bcde_ed1b_4a8b_a8a6_bcbecdb9544f.slice/crio-1ef0af5b5c8ca4069ee5b16625fb915934f18fc4ad882b6f21c674a980f144bd WatchSource:0}: Error finding container 1ef0af5b5c8ca4069ee5b16625fb915934f18fc4ad882b6f21c674a980f144bd: Status 404 returned error can't find the container with id 1ef0af5b5c8ca4069ee5b16625fb915934f18fc4ad882b6f21c674a980f144bd Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.934840 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-cf69dd54d-z6zmc"] Oct 02 18:40:51 crc kubenswrapper[4832]: I1002 18:40:51.992285 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-cf69dd54d-z6zmc"] Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.004327 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-9vtdd"] Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.011209 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hrk2h"] Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.019752 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n7h42"] Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.098298 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmx56"] Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.121125 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-hmx56"] Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.160431 4832 scope.go:117] "RemoveContainer" containerID="451c4ed9951e1b81154922c3a20b1d06a62b8589f785cc1f722a5941ad65fe55" Oct 02 
18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.835002 4832 generic.go:334] "Generic (PLEG): container finished" podID="7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f" containerID="5cb5e8da29f28563cb7b16d35f18f8c8dd125ddaeb9e3da6460f490f095b407d" exitCode=0 Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.835097 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9vtdd" event={"ID":"7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f","Type":"ContainerDied","Data":"5cb5e8da29f28563cb7b16d35f18f8c8dd125ddaeb9e3da6460f490f095b407d"} Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.836277 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9vtdd" event={"ID":"7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f","Type":"ContainerStarted","Data":"1ef0af5b5c8ca4069ee5b16625fb915934f18fc4ad882b6f21c674a980f144bd"} Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.837506 4832 generic.go:334] "Generic (PLEG): container finished" podID="a6e63ccc-48be-4d43-aff6-144ad30107df" containerID="11695547e5932e697581acc6993725cc4cd96da86561b4a213ec627072825d5d" exitCode=0 Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.837570 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jzjbv" event={"ID":"a6e63ccc-48be-4d43-aff6-144ad30107df","Type":"ContainerDied","Data":"11695547e5932e697581acc6993725cc4cd96da86561b4a213ec627072825d5d"} Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.837590 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jzjbv" event={"ID":"a6e63ccc-48be-4d43-aff6-144ad30107df","Type":"ContainerStarted","Data":"8433b5ca0d884476d2636519135663e78981364125d2893c34b78b7783c4c900"} Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.843758 4832 generic.go:334] "Generic (PLEG): container finished" podID="ea8be728-da97-4ac8-91ed-f43b4c0b249b" containerID="2280f581cc933666ee8a671dfecb9c3e3ec1e0f1233070307d2f50a86a009d90" exitCode=0 Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.843805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hrk2h" event={"ID":"ea8be728-da97-4ac8-91ed-f43b4c0b249b","Type":"ContainerDied","Data":"2280f581cc933666ee8a671dfecb9c3e3ec1e0f1233070307d2f50a86a009d90"} Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.844148 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hrk2h" event={"ID":"ea8be728-da97-4ac8-91ed-f43b4c0b249b","Type":"ContainerStarted","Data":"0c588443b7dd181009410914867d511f56f1cd5871fd29f84637dc7129dec453"} Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.846704 4832 generic.go:334] "Generic (PLEG): container finished" podID="86c035df-1cf1-477d-b195-e9096de5360f" containerID="e2515e5288e3172c916fb56122fe4932316e51c72ece74820b053199b8c2391a" exitCode=0 Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.846756 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n7h42" event={"ID":"86c035df-1cf1-477d-b195-e9096de5360f","Type":"ContainerDied","Data":"e2515e5288e3172c916fb56122fe4932316e51c72ece74820b053199b8c2391a"} Oct 02 18:40:52 crc kubenswrapper[4832]: I1002 18:40:52.846833 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n7h42" event={"ID":"86c035df-1cf1-477d-b195-e9096de5360f","Type":"ContainerStarted","Data":"595429689dd1ea8ac1ca8ec38d957d7d47cd533171dfc5f17f77e7da4c73ef95"} Oct 02 18:40:53 crc 
kubenswrapper[4832]: I1002 18:40:53.241898 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1982fd3e-190b-4955-9a76-6a35524fdaa1" path="/var/lib/kubelet/pods/1982fd3e-190b-4955-9a76-6a35524fdaa1/volumes" Oct 02 18:40:53 crc kubenswrapper[4832]: I1002 18:40:53.242601 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea039358-89f3-4cab-a81f-77dbdbd6e667" path="/var/lib/kubelet/pods/ea039358-89f3-4cab-a81f-77dbdbd6e667/volumes" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.357542 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:40:54 crc kubenswrapper[4832]: E1002 18:40:54.358246 4832 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 02 18:40:54 crc kubenswrapper[4832]: E1002 18:40:54.358353 4832 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 02 18:40:54 crc kubenswrapper[4832]: E1002 18:40:54.358401 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift podName:df7b8400-95d5-481a-a9a1-d5b2586f159f nodeName:}" failed. No retries permitted until 2025-10-02 18:41:10.358383413 +0000 UTC m=+1227.327826365 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift") pod "swift-storage-0" (UID: "df7b8400-95d5-481a-a9a1-d5b2586f159f") : configmap "swift-ring-files" not found Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.584471 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hrk2h" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.665529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqw9n\" (UniqueName: \"kubernetes.io/projected/ea8be728-da97-4ac8-91ed-f43b4c0b249b-kube-api-access-nqw9n\") pod \"ea8be728-da97-4ac8-91ed-f43b4c0b249b\" (UID: \"ea8be728-da97-4ac8-91ed-f43b4c0b249b\") " Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.679290 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea8be728-da97-4ac8-91ed-f43b4c0b249b-kube-api-access-nqw9n" (OuterVolumeSpecName: "kube-api-access-nqw9n") pod "ea8be728-da97-4ac8-91ed-f43b4c0b249b" (UID: "ea8be728-da97-4ac8-91ed-f43b4c0b249b"). InnerVolumeSpecName "kube-api-access-nqw9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.713155 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-hmx56" podUID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.767661 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqw9n\" (UniqueName: \"kubernetes.io/projected/ea8be728-da97-4ac8-91ed-f43b4c0b249b-kube-api-access-nqw9n\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.838077 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-9vtdd" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.848785 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jzjbv" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.856104 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n7h42" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.889340 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jzjbv" event={"ID":"a6e63ccc-48be-4d43-aff6-144ad30107df","Type":"ContainerDied","Data":"8433b5ca0d884476d2636519135663e78981364125d2893c34b78b7783c4c900"} Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.889379 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8433b5ca0d884476d2636519135663e78981364125d2893c34b78b7783c4c900" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.889428 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jzjbv" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.904202 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hrk2h" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.905099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hrk2h" event={"ID":"ea8be728-da97-4ac8-91ed-f43b4c0b249b","Type":"ContainerDied","Data":"0c588443b7dd181009410914867d511f56f1cd5871fd29f84637dc7129dec453"} Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.905143 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c588443b7dd181009410914867d511f56f1cd5871fd29f84637dc7129dec453" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.909648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerStarted","Data":"49c46f95d81f04ffa7f877971a12b555d7cc740d55c90cdf7d9f8bdfe3cb3a75"} Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.911585 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-n7h42" event={"ID":"86c035df-1cf1-477d-b195-e9096de5360f","Type":"ContainerDied","Data":"595429689dd1ea8ac1ca8ec38d957d7d47cd533171dfc5f17f77e7da4c73ef95"} Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.911611 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595429689dd1ea8ac1ca8ec38d957d7d47cd533171dfc5f17f77e7da4c73ef95" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.911622 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-n7h42" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.913567 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-9vtdd" event={"ID":"7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f","Type":"ContainerDied","Data":"1ef0af5b5c8ca4069ee5b16625fb915934f18fc4ad882b6f21c674a980f144bd"} Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.913589 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef0af5b5c8ca4069ee5b16625fb915934f18fc4ad882b6f21c674a980f144bd" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.913636 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-9vtdd" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.987301 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54k5p\" (UniqueName: \"kubernetes.io/projected/7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f-kube-api-access-54k5p\") pod \"7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f\" (UID: \"7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f\") " Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.987348 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfrtk\" (UniqueName: \"kubernetes.io/projected/86c035df-1cf1-477d-b195-e9096de5360f-kube-api-access-gfrtk\") pod \"86c035df-1cf1-477d-b195-e9096de5360f\" (UID: \"86c035df-1cf1-477d-b195-e9096de5360f\") " Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.987581 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svh4j\" (UniqueName: \"kubernetes.io/projected/a6e63ccc-48be-4d43-aff6-144ad30107df-kube-api-access-svh4j\") pod \"a6e63ccc-48be-4d43-aff6-144ad30107df\" (UID: \"a6e63ccc-48be-4d43-aff6-144ad30107df\") " Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.990987 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c035df-1cf1-477d-b195-e9096de5360f-kube-api-access-gfrtk" (OuterVolumeSpecName: "kube-api-access-gfrtk") pod "86c035df-1cf1-477d-b195-e9096de5360f" (UID: "86c035df-1cf1-477d-b195-e9096de5360f"). InnerVolumeSpecName "kube-api-access-gfrtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.991328 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6e63ccc-48be-4d43-aff6-144ad30107df-kube-api-access-svh4j" (OuterVolumeSpecName: "kube-api-access-svh4j") pod "a6e63ccc-48be-4d43-aff6-144ad30107df" (UID: "a6e63ccc-48be-4d43-aff6-144ad30107df"). InnerVolumeSpecName "kube-api-access-svh4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:54 crc kubenswrapper[4832]: I1002 18:40:54.991373 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f-kube-api-access-54k5p" (OuterVolumeSpecName: "kube-api-access-54k5p") pod "7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f" (UID: "7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f"). InnerVolumeSpecName "kube-api-access-54k5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:40:55 crc kubenswrapper[4832]: I1002 18:40:55.090055 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svh4j\" (UniqueName: \"kubernetes.io/projected/a6e63ccc-48be-4d43-aff6-144ad30107df-kube-api-access-svh4j\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:55 crc kubenswrapper[4832]: I1002 18:40:55.090085 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54k5p\" (UniqueName: \"kubernetes.io/projected/7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f-kube-api-access-54k5p\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:55 crc kubenswrapper[4832]: I1002 18:40:55.090095 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfrtk\" (UniqueName: \"kubernetes.io/projected/86c035df-1cf1-477d-b195-e9096de5360f-kube-api-access-gfrtk\") on node \"crc\" DevicePath \"\"" Oct 02 18:40:56 crc kubenswrapper[4832]: I1002 18:40:56.876066 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:40:56 crc kubenswrapper[4832]: I1002 18:40:56.876409 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:40:59 crc kubenswrapper[4832]: I1002 18:40:59.977260 4832 generic.go:334] "Generic (PLEG): container finished" podID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerID="aab15a231a3fad3f097a709bc28c3653595cdc6fce81258a72e2be1d69cc979f" exitCode=0 Oct 02 18:40:59 crc kubenswrapper[4832]: I1002 18:40:59.977416 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9fd4cd0-fd84-45cb-9c68-0985f52a1054","Type":"ContainerDied","Data":"aab15a231a3fad3f097a709bc28c3653595cdc6fce81258a72e2be1d69cc979f"} Oct 02 18:40:59 crc kubenswrapper[4832]: I1002 18:40:59.981502 4832 generic.go:334] "Generic (PLEG): container finished" podID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerID="b21ca3741d0fca37291dae0e91f9ff546ba06e2cd3972007eb0e57226a7359cc" exitCode=0 Oct 02 18:40:59 crc kubenswrapper[4832]: I1002 18:40:59.981551 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ff074fc-c56e-40f3-a327-b829d84c9866","Type":"ContainerDied","Data":"b21ca3741d0fca37291dae0e91f9ff546ba06e2cd3972007eb0e57226a7359cc"} Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.530778 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-e572-account-create-8jstm"] Oct 02 18:41:00 crc kubenswrapper[4832]: E1002 18:41:00.531650 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.531679 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: E1002 18:41:00.531710 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea039358-89f3-4cab-a81f-77dbdbd6e667" containerName="console" Oct 02 
18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.531719 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea039358-89f3-4cab-a81f-77dbdbd6e667" containerName="console" Oct 02 18:41:00 crc kubenswrapper[4832]: E1002 18:41:00.531727 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6e63ccc-48be-4d43-aff6-144ad30107df" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.531735 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6e63ccc-48be-4d43-aff6-144ad30107df" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: E1002 18:41:00.531748 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c035df-1cf1-477d-b195-e9096de5360f" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.531755 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c035df-1cf1-477d-b195-e9096de5360f" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: E1002 18:41:00.531770 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerName="init" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.531777 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerName="init" Oct 02 18:41:00 crc kubenswrapper[4832]: E1002 18:41:00.531797 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea8be728-da97-4ac8-91ed-f43b4c0b249b" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.531804 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea8be728-da97-4ac8-91ed-f43b4c0b249b" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: E1002 18:41:00.531824 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerName="dnsmasq-dns" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.531831 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerName="dnsmasq-dns" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.532070 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea039358-89f3-4cab-a81f-77dbdbd6e667" containerName="console" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.532089 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6e63ccc-48be-4d43-aff6-144ad30107df" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.532100 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c035df-1cf1-477d-b195-e9096de5360f" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.532126 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.532142 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea8be728-da97-4ac8-91ed-f43b4c0b249b" containerName="mariadb-database-create" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.532155 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1982fd3e-190b-4955-9a76-6a35524fdaa1" containerName="dnsmasq-dns" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.533076 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e572-account-create-8jstm" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.535196 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.539843 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e572-account-create-8jstm"] Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.566312 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x972p\" (UniqueName: \"kubernetes.io/projected/10a6ce2b-ed5d-4115-9d49-a30fd313044e-kube-api-access-x972p\") pod \"glance-e572-account-create-8jstm\" (UID: \"10a6ce2b-ed5d-4115-9d49-a30fd313044e\") " pod="openstack/glance-e572-account-create-8jstm" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.668322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x972p\" (UniqueName: \"kubernetes.io/projected/10a6ce2b-ed5d-4115-9d49-a30fd313044e-kube-api-access-x972p\") pod \"glance-e572-account-create-8jstm\" (UID: \"10a6ce2b-ed5d-4115-9d49-a30fd313044e\") " pod="openstack/glance-e572-account-create-8jstm" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.690923 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x972p\" (UniqueName: \"kubernetes.io/projected/10a6ce2b-ed5d-4115-9d49-a30fd313044e-kube-api-access-x972p\") pod \"glance-e572-account-create-8jstm\" (UID: \"10a6ce2b-ed5d-4115-9d49-a30fd313044e\") " pod="openstack/glance-e572-account-create-8jstm" Oct 02 18:41:00 crc kubenswrapper[4832]: I1002 18:41:00.910195 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e572-account-create-8jstm" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.020695 4832 generic.go:334] "Generic (PLEG): container finished" podID="3f58f07d-fb3b-4be8-a9b0-221aa5c01316" containerID="504933bc18cf6aadf6931dd83138dea4bbf1dc196a8c034ac43e1bd93f77fb86" exitCode=0 Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.021141 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7zhzt" event={"ID":"3f58f07d-fb3b-4be8-a9b0-221aa5c01316","Type":"ContainerDied","Data":"504933bc18cf6aadf6931dd83138dea4bbf1dc196a8c034ac43e1bd93f77fb86"} Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.027876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ff074fc-c56e-40f3-a327-b829d84c9866","Type":"ContainerStarted","Data":"c42a03d3fac22f9fe9775d8529908cd52a14bdd1194f0e8c257c308ef4cfb443"} Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.028653 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.043384 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9fd4cd0-fd84-45cb-9c68-0985f52a1054","Type":"ContainerStarted","Data":"f4b873ee8011967884ef7ed7faeabb5852e3b54d5ce9de869f2dc8439b006c3f"} Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.043696 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.053689 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerStarted","Data":"24d450b2bccfad699fae31292628961de1d2e4b54f528dbff0eae0acf27e007d"} Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.115729 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-g6w9z" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.136062 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=55.531784724 podStartE2EDuration="1m11.136037092s" podCreationTimestamp="2025-10-02 18:39:50 +0000 UTC" firstStartedPulling="2025-10-02 18:40:07.710808596 +0000 UTC m=+1164.680251468" lastFinishedPulling="2025-10-02 18:40:23.315060954 +0000 UTC m=+1180.284503836" observedRunningTime="2025-10-02 18:41:01.093031702 +0000 UTC m=+1218.062474584" watchObservedRunningTime="2025-10-02 18:41:01.136037092 +0000 UTC m=+1218.105479964" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.145555 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=55.54312993 podStartE2EDuration="1m11.14551525s" podCreationTimestamp="2025-10-02 18:39:50 +0000 UTC" firstStartedPulling="2025-10-02 18:40:07.712526699 +0000 UTC m=+1164.681969571" lastFinishedPulling="2025-10-02 18:40:23.314912019 +0000 UTC m=+1180.284354891" observedRunningTime="2025-10-02 18:41:01.131347235 +0000 UTC m=+1218.100790107" watchObservedRunningTime="2025-10-02 18:41:01.14551525 +0000 UTC m=+1218.114958112" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.152117 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-g6w9z" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.202687 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=13.042521822 podStartE2EDuration="1m4.202667094s" podCreationTimestamp="2025-10-02 18:39:57 +0000 UTC" firstStartedPulling="2025-10-02 18:40:09.144949582 +0000 UTC m=+1166.114392454" lastFinishedPulling="2025-10-02 18:41:00.305094854 +0000 UTC m=+1217.274537726" observedRunningTime="2025-10-02 18:41:01.197046448 +0000 UTC m=+1218.166489330" watchObservedRunningTime="2025-10-02 18:41:01.202667094 +0000 UTC m=+1218.172109966" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.407281 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-e572-account-create-8jstm"] Oct 02 18:41:01 crc kubenswrapper[4832]: W1002 18:41:01.414204 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10a6ce2b_ed5d_4115_9d49_a30fd313044e.slice/crio-192c394a32d0518e62c8b8ae62494fb6ca4d299574d05a0930ceec76b595ef9a WatchSource:0}: Error finding container 192c394a32d0518e62c8b8ae62494fb6ca4d299574d05a0930ceec76b595ef9a: Status 404 returned error can't find the container with id 192c394a32d0518e62c8b8ae62494fb6ca4d299574d05a0930ceec76b595ef9a Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.444210 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6trqf-config-rtw2m"] Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.445877 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.454118 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.492496 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6trqf-config-rtw2m"] Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.586767 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fzrb\" (UniqueName: \"kubernetes.io/projected/c4f4ba95-903f-47e8-8e24-fe0e963b434f-kube-api-access-4fzrb\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.587032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-log-ovn\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.587053 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run-ovn\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.587090 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-additional-scripts\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.587118 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-scripts\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.587173 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.689023 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-log-ovn\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.689067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run-ovn\") 
pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.689109 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-additional-scripts\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.689140 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-scripts\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.689205 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.689288 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fzrb\" (UniqueName: \"kubernetes.io/projected/c4f4ba95-903f-47e8-8e24-fe0e963b434f-kube-api-access-4fzrb\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.689377 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-log-ovn\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.690088 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-additional-scripts\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.690137 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.690152 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run-ovn\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.691203 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-scripts\") pod \"ovn-controller-6trqf-config-rtw2m\" 
(UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.715465 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fzrb\" (UniqueName: \"kubernetes.io/projected/c4f4ba95-903f-47e8-8e24-fe0e963b434f-kube-api-access-4fzrb\") pod \"ovn-controller-6trqf-config-rtw2m\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:01 crc kubenswrapper[4832]: I1002 18:41:01.840394 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.102768 4832 generic.go:334] "Generic (PLEG): container finished" podID="10a6ce2b-ed5d-4115-9d49-a30fd313044e" containerID="854b736cb4633a488366a6d49a544882ebfff5acc3331f4db92ea942b93aa6a3" exitCode=0 Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.103378 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e572-account-create-8jstm" event={"ID":"10a6ce2b-ed5d-4115-9d49-a30fd313044e","Type":"ContainerDied","Data":"854b736cb4633a488366a6d49a544882ebfff5acc3331f4db92ea942b93aa6a3"} Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.103428 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e572-account-create-8jstm" event={"ID":"10a6ce2b-ed5d-4115-9d49-a30fd313044e","Type":"ContainerStarted","Data":"192c394a32d0518e62c8b8ae62494fb6ca4d299574d05a0930ceec76b595ef9a"} Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.456582 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6trqf-config-rtw2m"] Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.800392 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.817834 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-etc-swift\") pod \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.817908 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-swiftconf\") pod \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.817945 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-scripts\") pod \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.818032 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-ring-data-devices\") pod \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.818050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-dispersionconf\") pod \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.818077 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gngqr\" (UniqueName: \"kubernetes.io/projected/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-kube-api-access-gngqr\") pod \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.818106 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-combined-ca-bundle\") pod \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\" (UID: \"3f58f07d-fb3b-4be8-a9b0-221aa5c01316\") " Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.820011 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3f58f07d-fb3b-4be8-a9b0-221aa5c01316" (UID: "3f58f07d-fb3b-4be8-a9b0-221aa5c01316"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.820086 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3f58f07d-fb3b-4be8-a9b0-221aa5c01316" (UID: "3f58f07d-fb3b-4be8-a9b0-221aa5c01316"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.828165 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3f58f07d-fb3b-4be8-a9b0-221aa5c01316" (UID: "3f58f07d-fb3b-4be8-a9b0-221aa5c01316"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.843477 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-kube-api-access-gngqr" (OuterVolumeSpecName: "kube-api-access-gngqr") pod "3f58f07d-fb3b-4be8-a9b0-221aa5c01316" (UID: "3f58f07d-fb3b-4be8-a9b0-221aa5c01316"). InnerVolumeSpecName "kube-api-access-gngqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.856762 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f58f07d-fb3b-4be8-a9b0-221aa5c01316" (UID: "3f58f07d-fb3b-4be8-a9b0-221aa5c01316"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.872375 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3f58f07d-fb3b-4be8-a9b0-221aa5c01316" (UID: "3f58f07d-fb3b-4be8-a9b0-221aa5c01316"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.879650 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-scripts" (OuterVolumeSpecName: "scripts") pod "3f58f07d-fb3b-4be8-a9b0-221aa5c01316" (UID: "3f58f07d-fb3b-4be8-a9b0-221aa5c01316"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.921864 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gngqr\" (UniqueName: \"kubernetes.io/projected/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-kube-api-access-gngqr\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.922218 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.922229 4832 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.922238 4832 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.922252 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.922337 4832 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:02 crc kubenswrapper[4832]: I1002 18:41:02.922346 4832 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f58f07d-fb3b-4be8-a9b0-221aa5c01316-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.112107 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6trqf-config-rtw2m" event={"ID":"c4f4ba95-903f-47e8-8e24-fe0e963b434f","Type":"ContainerStarted","Data":"a912356fa8f9d772762e7c377369ee2f6c3e83f78eb8f3dc619a9a4c4fc9c2a0"} Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.112151 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6trqf-config-rtw2m" event={"ID":"c4f4ba95-903f-47e8-8e24-fe0e963b434f","Type":"ContainerStarted","Data":"d92ae05b8f207dc9571699bbcc0d7b5caab5eaed4a92173992b4d8f03a41ec5a"} Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.115073 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-7zhzt" event={"ID":"3f58f07d-fb3b-4be8-a9b0-221aa5c01316","Type":"ContainerDied","Data":"d485a83750bdcd35f4294d67192dbacc631124805032566741b150e62c2a04fd"} Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.115107 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d485a83750bdcd35f4294d67192dbacc631124805032566741b150e62c2a04fd" Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.115142 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-7zhzt" Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.152632 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6trqf-config-rtw2m" podStartSLOduration=2.152612844 podStartE2EDuration="2.152612844s" podCreationTimestamp="2025-10-02 18:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:41:03.139738961 +0000 UTC m=+1220.109181833" watchObservedRunningTime="2025-10-02 18:41:03.152612844 +0000 UTC m=+1220.122055716" Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.563989 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.602320 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-e572-account-create-8jstm" Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.738750 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x972p\" (UniqueName: \"kubernetes.io/projected/10a6ce2b-ed5d-4115-9d49-a30fd313044e-kube-api-access-x972p\") pod \"10a6ce2b-ed5d-4115-9d49-a30fd313044e\" (UID: \"10a6ce2b-ed5d-4115-9d49-a30fd313044e\") " Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.749915 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a6ce2b-ed5d-4115-9d49-a30fd313044e-kube-api-access-x972p" (OuterVolumeSpecName: "kube-api-access-x972p") pod "10a6ce2b-ed5d-4115-9d49-a30fd313044e" (UID: "10a6ce2b-ed5d-4115-9d49-a30fd313044e"). InnerVolumeSpecName "kube-api-access-x972p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:03 crc kubenswrapper[4832]: I1002 18:41:03.841950 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x972p\" (UniqueName: \"kubernetes.io/projected/10a6ce2b-ed5d-4115-9d49-a30fd313044e-kube-api-access-x972p\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.125763 4832 generic.go:334] "Generic (PLEG): container finished" podID="c4f4ba95-903f-47e8-8e24-fe0e963b434f" containerID="a912356fa8f9d772762e7c377369ee2f6c3e83f78eb8f3dc619a9a4c4fc9c2a0" exitCode=0 Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.125813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6trqf-config-rtw2m" event={"ID":"c4f4ba95-903f-47e8-8e24-fe0e963b434f","Type":"ContainerDied","Data":"a912356fa8f9d772762e7c377369ee2f6c3e83f78eb8f3dc619a9a4c4fc9c2a0"} Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.128242 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-e572-account-create-8jstm" event={"ID":"10a6ce2b-ed5d-4115-9d49-a30fd313044e","Type":"ContainerDied","Data":"192c394a32d0518e62c8b8ae62494fb6ca4d299574d05a0930ceec76b595ef9a"} Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.128291 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="192c394a32d0518e62c8b8ae62494fb6ca4d299574d05a0930ceec76b595ef9a" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.128327 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-e572-account-create-8jstm" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.970186 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2e33-account-create-2p75z"] Oct 02 18:41:04 crc kubenswrapper[4832]: E1002 18:41:04.970946 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a6ce2b-ed5d-4115-9d49-a30fd313044e" containerName="mariadb-account-create" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.970966 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a6ce2b-ed5d-4115-9d49-a30fd313044e" containerName="mariadb-account-create" Oct 02 18:41:04 crc kubenswrapper[4832]: E1002 18:41:04.971012 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f58f07d-fb3b-4be8-a9b0-221aa5c01316" containerName="swift-ring-rebalance" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.971021 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f58f07d-fb3b-4be8-a9b0-221aa5c01316" containerName="swift-ring-rebalance" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.971459 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a6ce2b-ed5d-4115-9d49-a30fd313044e" containerName="mariadb-account-create" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.971510 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f58f07d-fb3b-4be8-a9b0-221aa5c01316" containerName="swift-ring-rebalance" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.972336 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2e33-account-create-2p75z" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.978701 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 02 18:41:04 crc kubenswrapper[4832]: I1002 18:41:04.983371 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2e33-account-create-2p75z"] Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.076114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx9lj\" (UniqueName: \"kubernetes.io/projected/f04348c8-f7ec-43e5-a7aa-f216ff10068e-kube-api-access-hx9lj\") pod \"keystone-2e33-account-create-2p75z\" (UID: \"f04348c8-f7ec-43e5-a7aa-f216ff10068e\") " pod="openstack/keystone-2e33-account-create-2p75z" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.178105 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx9lj\" (UniqueName: \"kubernetes.io/projected/f04348c8-f7ec-43e5-a7aa-f216ff10068e-kube-api-access-hx9lj\") pod \"keystone-2e33-account-create-2p75z\" (UID: \"f04348c8-f7ec-43e5-a7aa-f216ff10068e\") " pod="openstack/keystone-2e33-account-create-2p75z" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.204673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx9lj\" (UniqueName: \"kubernetes.io/projected/f04348c8-f7ec-43e5-a7aa-f216ff10068e-kube-api-access-hx9lj\") pod \"keystone-2e33-account-create-2p75z\" (UID: \"f04348c8-f7ec-43e5-a7aa-f216ff10068e\") " pod="openstack/keystone-2e33-account-create-2p75z" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.281557 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5d28-account-create-72ps9"] Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.286566 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5d28-account-create-72ps9" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.301584 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d28-account-create-72ps9"] Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.304240 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2e33-account-create-2p75z" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.306596 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.384779 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlgq\" (UniqueName: \"kubernetes.io/projected/bc75c972-2235-4b3f-8483-14f22d20f58f-kube-api-access-qhlgq\") pod \"placement-5d28-account-create-72ps9\" (UID: \"bc75c972-2235-4b3f-8483-14f22d20f58f\") " pod="openstack/placement-5d28-account-create-72ps9" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.487674 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlgq\" (UniqueName: \"kubernetes.io/projected/bc75c972-2235-4b3f-8483-14f22d20f58f-kube-api-access-qhlgq\") pod \"placement-5d28-account-create-72ps9\" (UID: \"bc75c972-2235-4b3f-8483-14f22d20f58f\") " pod="openstack/placement-5d28-account-create-72ps9" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.534918 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlgq\" (UniqueName: \"kubernetes.io/projected/bc75c972-2235-4b3f-8483-14f22d20f58f-kube-api-access-qhlgq\") pod \"placement-5d28-account-create-72ps9\" (UID: \"bc75c972-2235-4b3f-8483-14f22d20f58f\") " pod="openstack/placement-5d28-account-create-72ps9" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.621943 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d28-account-create-72ps9" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.737515 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-665w5"] Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.738907 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-665w5" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.749007 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-665w5"] Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.751225 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.756592 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rvnpc" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.897044 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldrnc\" (UniqueName: \"kubernetes.io/projected/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-kube-api-access-ldrnc\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.897198 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-combined-ca-bundle\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.897273 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-config-data\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.897301 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-db-sync-config-data\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.946367 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.999222 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-combined-ca-bundle\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.999329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-config-data\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.999360 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-db-sync-config-data\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:05 crc kubenswrapper[4832]: I1002 18:41:05.999400 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldrnc\" (UniqueName: \"kubernetes.io/projected/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-kube-api-access-ldrnc\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.007210 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-config-data\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.007687 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-db-sync-config-data\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.009290 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-combined-ca-bundle\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.019868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldrnc\" (UniqueName: \"kubernetes.io/projected/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-kube-api-access-ldrnc\") pod \"glance-db-sync-665w5\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " pod="openstack/glance-db-sync-665w5" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.081655 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-665w5" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.100862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run\") pod \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.100916 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run-ovn\") pod \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.101058 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fzrb\" (UniqueName: \"kubernetes.io/projected/c4f4ba95-903f-47e8-8e24-fe0e963b434f-kube-api-access-4fzrb\") pod \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.101155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-additional-scripts\") pod \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.101201 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-scripts\") pod \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.101251 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-log-ovn\") pod \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\" (UID: \"c4f4ba95-903f-47e8-8e24-fe0e963b434f\") " Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.101863 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c4f4ba95-903f-47e8-8e24-fe0e963b434f" (UID: "c4f4ba95-903f-47e8-8e24-fe0e963b434f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.101907 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run" (OuterVolumeSpecName: "var-run") pod "c4f4ba95-903f-47e8-8e24-fe0e963b434f" (UID: "c4f4ba95-903f-47e8-8e24-fe0e963b434f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.101927 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c4f4ba95-903f-47e8-8e24-fe0e963b434f" (UID: "c4f4ba95-903f-47e8-8e24-fe0e963b434f"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.102769 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c4f4ba95-903f-47e8-8e24-fe0e963b434f" (UID: "c4f4ba95-903f-47e8-8e24-fe0e963b434f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.103584 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-scripts" (OuterVolumeSpecName: "scripts") pod "c4f4ba95-903f-47e8-8e24-fe0e963b434f" (UID: "c4f4ba95-903f-47e8-8e24-fe0e963b434f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.109543 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f4ba95-903f-47e8-8e24-fe0e963b434f-kube-api-access-4fzrb" (OuterVolumeSpecName: "kube-api-access-4fzrb") pod "c4f4ba95-903f-47e8-8e24-fe0e963b434f" (UID: "c4f4ba95-903f-47e8-8e24-fe0e963b434f"). InnerVolumeSpecName "kube-api-access-4fzrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.131111 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2e33-account-create-2p75z"] Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.153924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6trqf-config-rtw2m" event={"ID":"c4f4ba95-903f-47e8-8e24-fe0e963b434f","Type":"ContainerDied","Data":"d92ae05b8f207dc9571699bbcc0d7b5caab5eaed4a92173992b4d8f03a41ec5a"} Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.153960 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92ae05b8f207dc9571699bbcc0d7b5caab5eaed4a92173992b4d8f03a41ec5a" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.154012 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6trqf-config-rtw2m" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.203156 4832 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.203179 4832 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.203187 4832 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c4f4ba95-903f-47e8-8e24-fe0e963b434f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.203198 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fzrb\" (UniqueName: \"kubernetes.io/projected/c4f4ba95-903f-47e8-8e24-fe0e963b434f-kube-api-access-4fzrb\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.203211 4832 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.203219 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c4f4ba95-903f-47e8-8e24-fe0e963b434f-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:06 crc kubenswrapper[4832]: I1002 18:41:06.300498 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5d28-account-create-72ps9"] Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:06.708689 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-665w5"] Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.072017 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6trqf-config-rtw2m"] Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.105904 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6trqf-config-rtw2m"] Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.167056 4832 generic.go:334] "Generic (PLEG): container finished" podID="f04348c8-f7ec-43e5-a7aa-f216ff10068e" containerID="96fb5b4f09e2dbc4cd05f6c754a168bdd6fc655dc0cb24719a58c5573a70871c" exitCode=0 Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.167254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2e33-account-create-2p75z" event={"ID":"f04348c8-f7ec-43e5-a7aa-f216ff10068e","Type":"ContainerDied","Data":"96fb5b4f09e2dbc4cd05f6c754a168bdd6fc655dc0cb24719a58c5573a70871c"} Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.167373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2e33-account-create-2p75z" event={"ID":"f04348c8-f7ec-43e5-a7aa-f216ff10068e","Type":"ContainerStarted","Data":"5af1072c394b9f647e755ef0113b755147d37ec7cae5500c8e9877dc75e744c4"} Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.169176 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc75c972-2235-4b3f-8483-14f22d20f58f" containerID="2502e6695845ba61cf2acae39c411c1c8ad40db05cd20265e119f407204e7d3f" exitCode=0 Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.169271 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d28-account-create-72ps9" event={"ID":"bc75c972-2235-4b3f-8483-14f22d20f58f","Type":"ContainerDied","Data":"2502e6695845ba61cf2acae39c411c1c8ad40db05cd20265e119f407204e7d3f"} Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.169302 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d28-account-create-72ps9" event={"ID":"bc75c972-2235-4b3f-8483-14f22d20f58f","Type":"ContainerStarted","Data":"0267a8fd7870b3b3fc4ce83b1428b75d47d9f384ebaff7fe947473c6ea648507"} Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.170316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-665w5" event={"ID":"c83e9ef5-26f5-4ec5-b70c-c28549d863f6","Type":"ContainerStarted","Data":"4b376820e34e47f0e4f57603a67e2d60758b878f970220fdd74ea7f628d77d5a"} Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.241771 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f4ba95-903f-47e8-8e24-fe0e963b434f" path="/var/lib/kubelet/pods/c4f4ba95-903f-47e8-8e24-fe0e963b434f/volumes" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.275541 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-fec3-account-create-brv2h"] Oct 02 18:41:07 crc kubenswrapper[4832]: E1002 18:41:07.276035 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f4ba95-903f-47e8-8e24-fe0e963b434f" containerName="ovn-config" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.276047 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f4ba95-903f-47e8-8e24-fe0e963b434f" containerName="ovn-config" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.276241 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f4ba95-903f-47e8-8e24-fe0e963b434f" containerName="ovn-config" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.277011 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-fec3-account-create-brv2h" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.279485 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.300374 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-fec3-account-create-brv2h"] Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.431243 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxts\" (UniqueName: \"kubernetes.io/projected/95b6662e-93a0-45b2-9f20-d840085858f3-kube-api-access-lzxts\") pod \"mysqld-exporter-fec3-account-create-brv2h\" (UID: \"95b6662e-93a0-45b2-9f20-d840085858f3\") " pod="openstack/mysqld-exporter-fec3-account-create-brv2h" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.533050 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxts\" (UniqueName: \"kubernetes.io/projected/95b6662e-93a0-45b2-9f20-d840085858f3-kube-api-access-lzxts\") pod \"mysqld-exporter-fec3-account-create-brv2h\" (UID: \"95b6662e-93a0-45b2-9f20-d840085858f3\") " pod="openstack/mysqld-exporter-fec3-account-create-brv2h" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.568179 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxts\" (UniqueName: \"kubernetes.io/projected/95b6662e-93a0-45b2-9f20-d840085858f3-kube-api-access-lzxts\") pod \"mysqld-exporter-fec3-account-create-brv2h\" (UID: \"95b6662e-93a0-45b2-9f20-d840085858f3\") " pod="openstack/mysqld-exporter-fec3-account-create-brv2h" Oct 02 18:41:07 crc kubenswrapper[4832]: I1002 18:41:07.601985 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fec3-account-create-brv2h" Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.087130 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-fec3-account-create-brv2h"] Oct 02 18:41:08 crc kubenswrapper[4832]: W1002 18:41:08.089090 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95b6662e_93a0_45b2_9f20_d840085858f3.slice/crio-51d1dea09f4d72f5b0211f3c027cc30b64afc5f640dffe895c3bc3d30093054a WatchSource:0}: Error finding container 51d1dea09f4d72f5b0211f3c027cc30b64afc5f640dffe895c3bc3d30093054a: Status 404 returned error can't find the container with id 51d1dea09f4d72f5b0211f3c027cc30b64afc5f640dffe895c3bc3d30093054a Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.181638 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fec3-account-create-brv2h" event={"ID":"95b6662e-93a0-45b2-9f20-d840085858f3","Type":"ContainerStarted","Data":"51d1dea09f4d72f5b0211f3c027cc30b64afc5f640dffe895c3bc3d30093054a"} Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.718442 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d28-account-create-72ps9" Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.725740 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2e33-account-create-2p75z" Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.864422 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx9lj\" (UniqueName: \"kubernetes.io/projected/f04348c8-f7ec-43e5-a7aa-f216ff10068e-kube-api-access-hx9lj\") pod \"f04348c8-f7ec-43e5-a7aa-f216ff10068e\" (UID: \"f04348c8-f7ec-43e5-a7aa-f216ff10068e\") " Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.864458 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhlgq\" (UniqueName: \"kubernetes.io/projected/bc75c972-2235-4b3f-8483-14f22d20f58f-kube-api-access-qhlgq\") pod \"bc75c972-2235-4b3f-8483-14f22d20f58f\" (UID: \"bc75c972-2235-4b3f-8483-14f22d20f58f\") " Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.869763 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc75c972-2235-4b3f-8483-14f22d20f58f-kube-api-access-qhlgq" (OuterVolumeSpecName: "kube-api-access-qhlgq") pod "bc75c972-2235-4b3f-8483-14f22d20f58f" (UID: "bc75c972-2235-4b3f-8483-14f22d20f58f"). InnerVolumeSpecName "kube-api-access-qhlgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.885174 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f04348c8-f7ec-43e5-a7aa-f216ff10068e-kube-api-access-hx9lj" (OuterVolumeSpecName: "kube-api-access-hx9lj") pod "f04348c8-f7ec-43e5-a7aa-f216ff10068e" (UID: "f04348c8-f7ec-43e5-a7aa-f216ff10068e"). InnerVolumeSpecName "kube-api-access-hx9lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.966362 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx9lj\" (UniqueName: \"kubernetes.io/projected/f04348c8-f7ec-43e5-a7aa-f216ff10068e-kube-api-access-hx9lj\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:08 crc kubenswrapper[4832]: I1002 18:41:08.966389 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhlgq\" (UniqueName: \"kubernetes.io/projected/bc75c972-2235-4b3f-8483-14f22d20f58f-kube-api-access-qhlgq\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:09 crc kubenswrapper[4832]: I1002 18:41:09.194647 4832 generic.go:334] "Generic (PLEG): container finished" podID="95b6662e-93a0-45b2-9f20-d840085858f3" containerID="88d288e28e942d9a2bc5b757c86807ecb22509ca4dbff5612c08f8daa0e960a3" exitCode=0 Oct 02 18:41:09 crc kubenswrapper[4832]: I1002 18:41:09.194723 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fec3-account-create-brv2h" event={"ID":"95b6662e-93a0-45b2-9f20-d840085858f3","Type":"ContainerDied","Data":"88d288e28e942d9a2bc5b757c86807ecb22509ca4dbff5612c08f8daa0e960a3"} Oct 02 18:41:09 crc kubenswrapper[4832]: I1002 18:41:09.196313 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2e33-account-create-2p75z" event={"ID":"f04348c8-f7ec-43e5-a7aa-f216ff10068e","Type":"ContainerDied","Data":"5af1072c394b9f647e755ef0113b755147d37ec7cae5500c8e9877dc75e744c4"} Oct 02 18:41:09 crc kubenswrapper[4832]: I1002 18:41:09.196335 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af1072c394b9f647e755ef0113b755147d37ec7cae5500c8e9877dc75e744c4" Oct 02 18:41:09 crc kubenswrapper[4832]: I1002 18:41:09.196393 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2e33-account-create-2p75z" Oct 02 18:41:09 crc kubenswrapper[4832]: I1002 18:41:09.204230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5d28-account-create-72ps9" event={"ID":"bc75c972-2235-4b3f-8483-14f22d20f58f","Type":"ContainerDied","Data":"0267a8fd7870b3b3fc4ce83b1428b75d47d9f384ebaff7fe947473c6ea648507"} Oct 02 18:41:09 crc kubenswrapper[4832]: I1002 18:41:09.204314 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0267a8fd7870b3b3fc4ce83b1428b75d47d9f384ebaff7fe947473c6ea648507" Oct 02 18:41:09 crc kubenswrapper[4832]: I1002 18:41:09.204280 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5d28-account-create-72ps9" Oct 02 18:41:10 crc kubenswrapper[4832]: I1002 18:41:10.396001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:41:10 crc kubenswrapper[4832]: I1002 18:41:10.405965 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/df7b8400-95d5-481a-a9a1-d5b2586f159f-etc-swift\") pod \"swift-storage-0\" (UID: \"df7b8400-95d5-481a-a9a1-d5b2586f159f\") " pod="openstack/swift-storage-0" Oct 02 18:41:10 crc kubenswrapper[4832]: I1002 18:41:10.490512 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 02 18:41:10 crc kubenswrapper[4832]: I1002 18:41:10.789579 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fec3-account-create-brv2h" Oct 02 18:41:10 crc kubenswrapper[4832]: I1002 18:41:10.908039 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzxts\" (UniqueName: \"kubernetes.io/projected/95b6662e-93a0-45b2-9f20-d840085858f3-kube-api-access-lzxts\") pod \"95b6662e-93a0-45b2-9f20-d840085858f3\" (UID: \"95b6662e-93a0-45b2-9f20-d840085858f3\") " Oct 02 18:41:10 crc kubenswrapper[4832]: I1002 18:41:10.913032 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b6662e-93a0-45b2-9f20-d840085858f3-kube-api-access-lzxts" (OuterVolumeSpecName: "kube-api-access-lzxts") pod "95b6662e-93a0-45b2-9f20-d840085858f3" (UID: "95b6662e-93a0-45b2-9f20-d840085858f3"). InnerVolumeSpecName "kube-api-access-lzxts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.010127 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzxts\" (UniqueName: \"kubernetes.io/projected/95b6662e-93a0-45b2-9f20-d840085858f3-kube-api-access-lzxts\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.058950 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-6trqf" Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.086450 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 02 18:41:11 crc kubenswrapper[4832]: W1002 18:41:11.090780 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf7b8400_95d5_481a_a9a1_d5b2586f159f.slice/crio-790dbe4cf85b335e1d385380a0aaafc889ff11f7fe08c7d96df49515f951d50b WatchSource:0}: Error finding container 790dbe4cf85b335e1d385380a0aaafc889ff11f7fe08c7d96df49515f951d50b: Status 404 returned error can't find the container with id 790dbe4cf85b335e1d385380a0aaafc889ff11f7fe08c7d96df49515f951d50b Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.094139 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.229797 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-fec3-account-create-brv2h" Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.234671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"790dbe4cf85b335e1d385380a0aaafc889ff11f7fe08c7d96df49515f951d50b"} Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.234727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-fec3-account-create-brv2h" event={"ID":"95b6662e-93a0-45b2-9f20-d840085858f3","Type":"ContainerDied","Data":"51d1dea09f4d72f5b0211f3c027cc30b64afc5f640dffe895c3bc3d30093054a"} Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.234746 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51d1dea09f4d72f5b0211f3c027cc30b64afc5f640dffe895c3bc3d30093054a" Oct 02 18:41:11 crc kubenswrapper[4832]: I1002 18:41:11.782408 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.025649 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.385893 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c"] Oct 02 18:41:12 crc kubenswrapper[4832]: E1002 18:41:12.386283 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f04348c8-f7ec-43e5-a7aa-f216ff10068e" containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.386299 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f04348c8-f7ec-43e5-a7aa-f216ff10068e" containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: E1002 18:41:12.386332 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b6662e-93a0-45b2-9f20-d840085858f3" 
containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.386339 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b6662e-93a0-45b2-9f20-d840085858f3" containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: E1002 18:41:12.386349 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc75c972-2235-4b3f-8483-14f22d20f58f" containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.386354 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc75c972-2235-4b3f-8483-14f22d20f58f" containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.386538 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc75c972-2235-4b3f-8483-14f22d20f58f" containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.386559 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f04348c8-f7ec-43e5-a7aa-f216ff10068e" containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.386574 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b6662e-93a0-45b2-9f20-d840085858f3" containerName="mariadb-account-create" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.387173 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.412332 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c"] Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.441857 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/9527bcd2-d70f-485b-a5f7-68ba69f883ec-kube-api-access-6jxlb\") pod \"mysqld-exporter-openstack-cell1-db-create-kdc6c\" (UID: \"9527bcd2-d70f-485b-a5f7-68ba69f883ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.543824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/9527bcd2-d70f-485b-a5f7-68ba69f883ec-kube-api-access-6jxlb\") pod \"mysqld-exporter-openstack-cell1-db-create-kdc6c\" (UID: \"9527bcd2-d70f-485b-a5f7-68ba69f883ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.560370 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/9527bcd2-d70f-485b-a5f7-68ba69f883ec-kube-api-access-6jxlb\") pod \"mysqld-exporter-openstack-cell1-db-create-kdc6c\" (UID: \"9527bcd2-d70f-485b-a5f7-68ba69f883ec\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" Oct 02 18:41:12 crc kubenswrapper[4832]: I1002 18:41:12.704399 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.194680 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c"] Oct 02 18:41:13 crc kubenswrapper[4832]: W1002 18:41:13.218438 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9527bcd2_d70f_485b_a5f7_68ba69f883ec.slice/crio-80e0082105aca5101eff3cd21a0285a0a158a1c3d2b615acf9f8bb61eb2c64e1 WatchSource:0}: Error finding container 80e0082105aca5101eff3cd21a0285a0a158a1c3d2b615acf9f8bb61eb2c64e1: Status 404 returned error can't find the container with id 80e0082105aca5101eff3cd21a0285a0a158a1c3d2b615acf9f8bb61eb2c64e1 Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.266453 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"71dd3fec4e9ca51d37995984e21963877f230b0cfe68abf7bbbe56d4972b42c3"} Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.266536 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"4095b726800870239614750f6c3968579635e13f0e8b9f9526b12ea754970501"} Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.274131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" event={"ID":"9527bcd2-d70f-485b-a5f7-68ba69f883ec","Type":"ContainerStarted","Data":"80e0082105aca5101eff3cd21a0285a0a158a1c3d2b615acf9f8bb61eb2c64e1"} Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.563577 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.565930 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.801907 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-g48l8"] Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.803155 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g48l8" Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.815815 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g48l8"] Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.871121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphg6\" (UniqueName: \"kubernetes.io/projected/878da378-32e5-4349-9902-1f0a9f75c7c1-kube-api-access-jphg6\") pod \"cinder-db-create-g48l8\" (UID: \"878da378-32e5-4349-9902-1f0a9f75c7c1\") " pod="openstack/cinder-db-create-g48l8" Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.888453 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-6hsqw"] Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.889674 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-6hsqw" Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.897977 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6hsqw"] Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.973467 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278hb\" (UniqueName: \"kubernetes.io/projected/78425b20-8e1d-4853-89a4-09a2c47be243-kube-api-access-278hb\") pod \"barbican-db-create-6hsqw\" (UID: \"78425b20-8e1d-4853-89a4-09a2c47be243\") " pod="openstack/barbican-db-create-6hsqw" Oct 02 18:41:13 crc kubenswrapper[4832]: I1002 18:41:13.973841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphg6\" (UniqueName: \"kubernetes.io/projected/878da378-32e5-4349-9902-1f0a9f75c7c1-kube-api-access-jphg6\") pod \"cinder-db-create-g48l8\" (UID: \"878da378-32e5-4349-9902-1f0a9f75c7c1\") " pod="openstack/cinder-db-create-g48l8" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.009141 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jphg6\" (UniqueName: \"kubernetes.io/projected/878da378-32e5-4349-9902-1f0a9f75c7c1-kube-api-access-jphg6\") pod \"cinder-db-create-g48l8\" (UID: \"878da378-32e5-4349-9902-1f0a9f75c7c1\") " pod="openstack/cinder-db-create-g48l8" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.014457 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-h9rpt"] Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.016034 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-h9rpt" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.029396 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-h9rpt"] Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.078633 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278hb\" (UniqueName: \"kubernetes.io/projected/78425b20-8e1d-4853-89a4-09a2c47be243-kube-api-access-278hb\") pod \"barbican-db-create-6hsqw\" (UID: \"78425b20-8e1d-4853-89a4-09a2c47be243\") " pod="openstack/barbican-db-create-6hsqw" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.079111 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr9tx\" (UniqueName: \"kubernetes.io/projected/96d82e23-d6e0-4faf-922d-505c1e637644-kube-api-access-lr9tx\") pod \"heat-db-create-h9rpt\" (UID: \"96d82e23-d6e0-4faf-922d-505c1e637644\") " pod="openstack/heat-db-create-h9rpt" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.115023 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278hb\" (UniqueName: \"kubernetes.io/projected/78425b20-8e1d-4853-89a4-09a2c47be243-kube-api-access-278hb\") pod \"barbican-db-create-6hsqw\" (UID: \"78425b20-8e1d-4853-89a4-09a2c47be243\") " pod="openstack/barbican-db-create-6hsqw" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.120898 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g48l8" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.165468 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vn8w7"] Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.168782 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.170716 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r8zxk" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.175685 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vn8w7"] Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.180683 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.181105 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.181125 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.181518 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr9tx\" (UniqueName: \"kubernetes.io/projected/96d82e23-d6e0-4faf-922d-505c1e637644-kube-api-access-lr9tx\") pod \"heat-db-create-h9rpt\" (UID: \"96d82e23-d6e0-4faf-922d-505c1e637644\") " pod="openstack/heat-db-create-h9rpt" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.203751 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6hsqw" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.205412 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr9tx\" (UniqueName: \"kubernetes.io/projected/96d82e23-d6e0-4faf-922d-505c1e637644-kube-api-access-lr9tx\") pod \"heat-db-create-h9rpt\" (UID: \"96d82e23-d6e0-4faf-922d-505c1e637644\") " pod="openstack/heat-db-create-h9rpt" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.284505 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-config-data\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.284661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-combined-ca-bundle\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.284793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt52p\" (UniqueName: \"kubernetes.io/projected/373e2d35-0357-4de4-9315-58efaca557f9-kube-api-access-vt52p\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.290823 4832 generic.go:334] "Generic (PLEG): container finished" podID="9527bcd2-d70f-485b-a5f7-68ba69f883ec" containerID="ca2804bd7a6edd4e18308e8f4f9cafc16f23b7632a7042376b21c76dea4dbedd" exitCode=0 Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.290895 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" 
event={"ID":"9527bcd2-d70f-485b-a5f7-68ba69f883ec","Type":"ContainerDied","Data":"ca2804bd7a6edd4e18308e8f4f9cafc16f23b7632a7042376b21c76dea4dbedd"} Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.294949 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-c7xhd"] Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.296164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"3372f312b8190f9bf83a02b044e25038f531e3bbeb1b678a51565aa460150802"} Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.296194 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"8dc3cbc20739b49cc9e20575b32e51660cd0ffaad9911d24f4d7a0174542eb5e"} Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.296396 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c7xhd" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.303503 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.317050 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-c7xhd"] Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.353471 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-h9rpt" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.386179 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-combined-ca-bundle\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.386372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zv58\" (UniqueName: \"kubernetes.io/projected/b1841a9c-82f5-4ece-8913-264ec2f5bdb2-kube-api-access-4zv58\") pod \"neutron-db-create-c7xhd\" (UID: \"b1841a9c-82f5-4ece-8913-264ec2f5bdb2\") " pod="openstack/neutron-db-create-c7xhd" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.386407 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt52p\" (UniqueName: \"kubernetes.io/projected/373e2d35-0357-4de4-9315-58efaca557f9-kube-api-access-vt52p\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.386425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-config-data\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.392442 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-config-data\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc 
kubenswrapper[4832]: I1002 18:41:14.406144 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-combined-ca-bundle\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.421955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt52p\" (UniqueName: \"kubernetes.io/projected/373e2d35-0357-4de4-9315-58efaca557f9-kube-api-access-vt52p\") pod \"keystone-db-sync-vn8w7\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.488041 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zv58\" (UniqueName: \"kubernetes.io/projected/b1841a9c-82f5-4ece-8913-264ec2f5bdb2-kube-api-access-4zv58\") pod \"neutron-db-create-c7xhd\" (UID: \"b1841a9c-82f5-4ece-8913-264ec2f5bdb2\") " pod="openstack/neutron-db-create-c7xhd" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.490447 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.508480 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zv58\" (UniqueName: \"kubernetes.io/projected/b1841a9c-82f5-4ece-8913-264ec2f5bdb2-kube-api-access-4zv58\") pod \"neutron-db-create-c7xhd\" (UID: \"b1841a9c-82f5-4ece-8913-264ec2f5bdb2\") " pod="openstack/neutron-db-create-c7xhd" Oct 02 18:41:14 crc kubenswrapper[4832]: I1002 18:41:14.618800 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-c7xhd" Oct 02 18:41:16 crc kubenswrapper[4832]: I1002 18:41:16.720822 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 02 18:41:16 crc kubenswrapper[4832]: I1002 18:41:16.721527 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="prometheus" containerID="cri-o://e90b5dbe1bd14707ec66e7587df1b454483eb11ca307138d4b652ffd026ac72f" gracePeriod=600 Oct 02 18:41:16 crc kubenswrapper[4832]: I1002 18:41:16.721574 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="thanos-sidecar" containerID="cri-o://24d450b2bccfad699fae31292628961de1d2e4b54f528dbff0eae0acf27e007d" gracePeriod=600 Oct 02 18:41:16 crc kubenswrapper[4832]: I1002 18:41:16.721631 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="config-reloader" containerID="cri-o://49c46f95d81f04ffa7f877971a12b555d7cc740d55c90cdf7d9f8bdfe3cb3a75" gracePeriod=600 Oct 02 18:41:17 crc kubenswrapper[4832]: I1002 18:41:17.337278 4832 generic.go:334] "Generic (PLEG): container finished" podID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerID="24d450b2bccfad699fae31292628961de1d2e4b54f528dbff0eae0acf27e007d" exitCode=0 Oct 02 18:41:17 crc kubenswrapper[4832]: I1002 18:41:17.337307 4832 generic.go:334] "Generic (PLEG): container finished" podID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerID="49c46f95d81f04ffa7f877971a12b555d7cc740d55c90cdf7d9f8bdfe3cb3a75" exitCode=0 Oct 02 18:41:17 crc kubenswrapper[4832]: I1002 18:41:17.337315 4832 generic.go:334] "Generic (PLEG): container finished" podID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerID="e90b5dbe1bd14707ec66e7587df1b454483eb11ca307138d4b652ffd026ac72f" exitCode=0 Oct 02 18:41:17 crc kubenswrapper[4832]: I1002 18:41:17.337334 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerDied","Data":"24d450b2bccfad699fae31292628961de1d2e4b54f528dbff0eae0acf27e007d"} Oct 02 18:41:17 crc kubenswrapper[4832]: I1002 18:41:17.337358 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerDied","Data":"49c46f95d81f04ffa7f877971a12b555d7cc740d55c90cdf7d9f8bdfe3cb3a75"} Oct 02 18:41:17 crc kubenswrapper[4832]: I1002 18:41:17.337367 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerDied","Data":"e90b5dbe1bd14707ec66e7587df1b454483eb11ca307138d4b652ffd026ac72f"} Oct 02 18:41:18 crc kubenswrapper[4832]: I1002 18:41:18.563807 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.138:9090/-/ready\": dial tcp 10.217.0.138:9090: connect: connection refused" Oct 02 18:41:23 crc kubenswrapper[4832]: E1002 18:41:23.044038 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Oct 02 18:41:23 crc kubenswrapper[4832]: E1002 18:41:23.045087 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldrnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-665w5_openstack(c83e9ef5-26f5-4ec5-b70c-c28549d863f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:41:23 crc kubenswrapper[4832]: E1002 18:41:23.046328 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-665w5" podUID="c83e9ef5-26f5-4ec5-b70c-c28549d863f6" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.265067 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.295397 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/9527bcd2-d70f-485b-a5f7-68ba69f883ec-kube-api-access-6jxlb\") pod \"9527bcd2-d70f-485b-a5f7-68ba69f883ec\" (UID: \"9527bcd2-d70f-485b-a5f7-68ba69f883ec\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.339215 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9527bcd2-d70f-485b-a5f7-68ba69f883ec-kube-api-access-6jxlb" (OuterVolumeSpecName: "kube-api-access-6jxlb") pod "9527bcd2-d70f-485b-a5f7-68ba69f883ec" (UID: "9527bcd2-d70f-485b-a5f7-68ba69f883ec"). InnerVolumeSpecName "kube-api-access-6jxlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.401937 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jxlb\" (UniqueName: \"kubernetes.io/projected/9527bcd2-d70f-485b-a5f7-68ba69f883ec-kube-api-access-6jxlb\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.411557 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.411672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c" event={"ID":"9527bcd2-d70f-485b-a5f7-68ba69f883ec","Type":"ContainerDied","Data":"80e0082105aca5101eff3cd21a0285a0a158a1c3d2b615acf9f8bb61eb2c64e1"} Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.411720 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e0082105aca5101eff3cd21a0285a0a158a1c3d2b615acf9f8bb61eb2c64e1" Oct 02 18:41:23 crc kubenswrapper[4832]: E1002 18:41:23.413340 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-665w5" podUID="c83e9ef5-26f5-4ec5-b70c-c28549d863f6" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.559683 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.605488 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-config\") pod \"536c7c21-106b-48f8-9238-37b85edbf5f2\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.605544 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/536c7c21-106b-48f8-9238-37b85edbf5f2-prometheus-metric-storage-rulefiles-0\") pod \"536c7c21-106b-48f8-9238-37b85edbf5f2\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.605568 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fknd\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-kube-api-access-2fknd\") pod \"536c7c21-106b-48f8-9238-37b85edbf5f2\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.605814 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"536c7c21-106b-48f8-9238-37b85edbf5f2\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.605840 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-web-config\") pod \"536c7c21-106b-48f8-9238-37b85edbf5f2\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.605871 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-thanos-prometheus-http-client-file\") pod \"536c7c21-106b-48f8-9238-37b85edbf5f2\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.605970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/536c7c21-106b-48f8-9238-37b85edbf5f2-config-out\") pod \"536c7c21-106b-48f8-9238-37b85edbf5f2\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.606033 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-tls-assets\") pod \"536c7c21-106b-48f8-9238-37b85edbf5f2\" (UID: \"536c7c21-106b-48f8-9238-37b85edbf5f2\") " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.606743 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536c7c21-106b-48f8-9238-37b85edbf5f2-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "536c7c21-106b-48f8-9238-37b85edbf5f2" (UID: "536c7c21-106b-48f8-9238-37b85edbf5f2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.612771 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-config" (OuterVolumeSpecName: "config") pod "536c7c21-106b-48f8-9238-37b85edbf5f2" (UID: "536c7c21-106b-48f8-9238-37b85edbf5f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.612921 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "536c7c21-106b-48f8-9238-37b85edbf5f2" (UID: "536c7c21-106b-48f8-9238-37b85edbf5f2"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.613138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "536c7c21-106b-48f8-9238-37b85edbf5f2" (UID: "536c7c21-106b-48f8-9238-37b85edbf5f2"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.613348 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-kube-api-access-2fknd" (OuterVolumeSpecName: "kube-api-access-2fknd") pod "536c7c21-106b-48f8-9238-37b85edbf5f2" (UID: "536c7c21-106b-48f8-9238-37b85edbf5f2"). InnerVolumeSpecName "kube-api-access-2fknd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.613685 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/536c7c21-106b-48f8-9238-37b85edbf5f2-config-out" (OuterVolumeSpecName: "config-out") pod "536c7c21-106b-48f8-9238-37b85edbf5f2" (UID: "536c7c21-106b-48f8-9238-37b85edbf5f2"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.640377 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "536c7c21-106b-48f8-9238-37b85edbf5f2" (UID: "536c7c21-106b-48f8-9238-37b85edbf5f2"). InnerVolumeSpecName "pvc-2158a4f6-171d-4fe3-9618-2603be9b8651". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.653058 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-web-config" (OuterVolumeSpecName: "web-config") pod "536c7c21-106b-48f8-9238-37b85edbf5f2" (UID: "536c7c21-106b-48f8-9238-37b85edbf5f2"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.708005 4832 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/536c7c21-106b-48f8-9238-37b85edbf5f2-config-out\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.708040 4832 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-tls-assets\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.708050 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.708059 4832 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/536c7c21-106b-48f8-9238-37b85edbf5f2-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.708072 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fknd\" (UniqueName: \"kubernetes.io/projected/536c7c21-106b-48f8-9238-37b85edbf5f2-kube-api-access-2fknd\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.708112 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") on node \"crc\" " Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.708123 4832 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-web-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.708132 4832 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/536c7c21-106b-48f8-9238-37b85edbf5f2-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.737682 4832 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.737867 4832 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2158a4f6-171d-4fe3-9618-2603be9b8651" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651") on node "crc" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.809664 4832 reconciler_common.go:293] "Volume detached for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.964387 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-g48l8"] Oct 02 18:41:23 crc kubenswrapper[4832]: I1002 18:41:23.977924 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vn8w7"] Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.003151 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-c7xhd"] Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.161136 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-h9rpt"] Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.183185 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-6hsqw"] Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.421004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c7xhd" event={"ID":"b1841a9c-82f5-4ece-8913-264ec2f5bdb2","Type":"ContainerStarted","Data":"0912dbe6936cfcf869e955f985dc38ea70a43b6a93f4c1cb118b1eb8d7ae2a88"} Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.424618 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6hsqw" event={"ID":"78425b20-8e1d-4853-89a4-09a2c47be243","Type":"ContainerStarted","Data":"ab189fd5356a27264ba69b0eb768bd57720c8631da9f41798fc0b7e819d5483d"} Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.426953 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vn8w7" event={"ID":"373e2d35-0357-4de4-9315-58efaca557f9","Type":"ContainerStarted","Data":"a7edeb7092d100786f62116ecf198b9b64b9b7009280036f10626c98d9f4fd52"} Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.429464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-h9rpt" event={"ID":"96d82e23-d6e0-4faf-922d-505c1e637644","Type":"ContainerStarted","Data":"c024700d9fa6cb3b02b0840b338583b6351fde90e4a1e8c2b846df6c4e9a992a"} Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.432594 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g48l8" event={"ID":"878da378-32e5-4349-9902-1f0a9f75c7c1","Type":"ContainerStarted","Data":"d412481f6f8250d36b528770a92b3ce09ae21aa0e9a4fccfc609b39041d0f5b6"} Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.434566 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"536c7c21-106b-48f8-9238-37b85edbf5f2","Type":"ContainerDied","Data":"cf42540a4b89b11c965629986713a0052e5b79dce1ba24d67e2820a833bdba9f"} Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.434623 4832 scope.go:117] "RemoveContainer" containerID="24d450b2bccfad699fae31292628961de1d2e4b54f528dbff0eae0acf27e007d" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.434750 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.463098 4832 scope.go:117] "RemoveContainer" containerID="49c46f95d81f04ffa7f877971a12b555d7cc740d55c90cdf7d9f8bdfe3cb3a75" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.485889 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.494291 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.514915 4832 scope.go:117] "RemoveContainer" containerID="e90b5dbe1bd14707ec66e7587df1b454483eb11ca307138d4b652ffd026ac72f" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.515013 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 02 18:41:24 crc kubenswrapper[4832]: E1002 18:41:24.517069 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="config-reloader" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.517094 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="config-reloader" Oct 02 18:41:24 crc kubenswrapper[4832]: E1002 18:41:24.517113 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9527bcd2-d70f-485b-a5f7-68ba69f883ec" containerName="mariadb-database-create" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.517121 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9527bcd2-d70f-485b-a5f7-68ba69f883ec" containerName="mariadb-database-create" Oct 02 18:41:24 crc kubenswrapper[4832]: E1002 18:41:24.517130 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="thanos-sidecar" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.517137 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="thanos-sidecar" Oct 02 18:41:24 crc kubenswrapper[4832]: E1002 18:41:24.517177 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="prometheus" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.517182 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="prometheus" Oct 02 18:41:24 crc kubenswrapper[4832]: E1002 18:41:24.517194 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="init-config-reloader" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.517200 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="init-config-reloader" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.517551 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9527bcd2-d70f-485b-a5f7-68ba69f883ec" containerName="mariadb-database-create" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.517576 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="thanos-sidecar" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.517590 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="prometheus" Oct 02 18:41:24 crc 
kubenswrapper[4832]: I1002 18:41:24.517608 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" containerName="config-reloader" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.519543 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.521166 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-pxmx9" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.521576 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.521820 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.522223 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.522375 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.527651 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.538866 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.552704 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.576421 4832 scope.go:117] "RemoveContainer" containerID="05fddf18aa25dc2b83a394bc823c1a48f99b5985179b5382fdcf393572c7197a" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.630903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-config\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.630964 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpjsz\" (UniqueName: \"kubernetes.io/projected/091b8e1f-4994-4bc6-8be4-c5a44668e088-kube-api-access-gpjsz\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.630999 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.631023 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.631062 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/091b8e1f-4994-4bc6-8be4-c5a44668e088-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.631077 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.631113 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/091b8e1f-4994-4bc6-8be4-c5a44668e088-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.631157 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.631176 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.631193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.631219 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/091b8e1f-4994-4bc6-8be4-c5a44668e088-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733286 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733344 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733389 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/091b8e1f-4994-4bc6-8be4-c5a44668e088-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733453 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/091b8e1f-4994-4bc6-8be4-c5a44668e088-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733496 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733522 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733542 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733569 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/091b8e1f-4994-4bc6-8be4-c5a44668e088-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " 
pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733628 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-config\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.733659 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpjsz\" (UniqueName: \"kubernetes.io/projected/091b8e1f-4994-4bc6-8be4-c5a44668e088-kube-api-access-gpjsz\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.738558 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/091b8e1f-4994-4bc6-8be4-c5a44668e088-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.742418 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/091b8e1f-4994-4bc6-8be4-c5a44668e088-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.743808 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.745605 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/091b8e1f-4994-4bc6-8be4-c5a44668e088-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.748779 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.750101 4832 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.750137 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5c8a2e34be458981b4e8afb2283a74ad10fe8fcab075c40f435ae20b523e7bdf/globalmount\"" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.751213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.751875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-config\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.752586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.756330 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/091b8e1f-4994-4bc6-8be4-c5a44668e088-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.757429 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpjsz\" (UniqueName: \"kubernetes.io/projected/091b8e1f-4994-4bc6-8be4-c5a44668e088-kube-api-access-gpjsz\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.815761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2158a4f6-171d-4fe3-9618-2603be9b8651\") pod \"prometheus-metric-storage-0\" (UID: \"091b8e1f-4994-4bc6-8be4-c5a44668e088\") " pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:24 crc kubenswrapper[4832]: I1002 18:41:24.842575 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.239582 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536c7c21-106b-48f8-9238-37b85edbf5f2" path="/var/lib/kubelet/pods/536c7c21-106b-48f8-9238-37b85edbf5f2/volumes" Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.449073 4832 generic.go:334] "Generic (PLEG): container finished" podID="b1841a9c-82f5-4ece-8913-264ec2f5bdb2" containerID="86b306bfc6fa4454735b4e76575e5d07032b55ac0b3d86ddf619f1ce7430c7e3" exitCode=0 Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.449128 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c7xhd" event={"ID":"b1841a9c-82f5-4ece-8913-264ec2f5bdb2","Type":"ContainerDied","Data":"86b306bfc6fa4454735b4e76575e5d07032b55ac0b3d86ddf619f1ce7430c7e3"} Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.452677 4832 generic.go:334] "Generic (PLEG): container finished" podID="78425b20-8e1d-4853-89a4-09a2c47be243" containerID="830350ec229b990c47f085c4b948f743b203a07b08f0deb0ebcf78bfaab3d580" exitCode=0 Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.452729 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6hsqw" event={"ID":"78425b20-8e1d-4853-89a4-09a2c47be243","Type":"ContainerDied","Data":"830350ec229b990c47f085c4b948f743b203a07b08f0deb0ebcf78bfaab3d580"} Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.454571 4832 generic.go:334] "Generic (PLEG): container finished" podID="96d82e23-d6e0-4faf-922d-505c1e637644" containerID="018efa9ba3e8f36edd8014d8aab640f3a4793ef64cac8cf6d445dece36750086" exitCode=0 Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.454616 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-h9rpt" event={"ID":"96d82e23-d6e0-4faf-922d-505c1e637644","Type":"ContainerDied","Data":"018efa9ba3e8f36edd8014d8aab640f3a4793ef64cac8cf6d445dece36750086"} Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.457140 4832 generic.go:334] "Generic (PLEG): container finished" podID="878da378-32e5-4349-9902-1f0a9f75c7c1" containerID="28bea44be94a0b461d34dbf1a376ed163a049ad142afcaee894335f27913c1df" exitCode=0 Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.457223 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g48l8" event={"ID":"878da378-32e5-4349-9902-1f0a9f75c7c1","Type":"ContainerDied","Data":"28bea44be94a0b461d34dbf1a376ed163a049ad142afcaee894335f27913c1df"} Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.478408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"c90da0efff96c76c0fb901afc94d62d8d830347942244d13ea2411c4332f6ded"} Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.478464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"eeac3d69424b932cf40b5977d4686d0e00594a440990aa81567e90c51427d25a"} Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.478476 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"60f060ea673fa97aa66632e9a6a8c6193edd95f26d08fc87c629377750e56bda"} Oct 02 18:41:25 crc kubenswrapper[4832]: I1002 18:41:25.529235 4832 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Oct 02 18:41:26 crc kubenswrapper[4832]: I1002 18:41:26.496772 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"8835ca7b0cd435f720754b35c0d329cfdc1657ec7895bf00d2c3966aaeed31bb"} Oct 02 18:41:26 crc kubenswrapper[4832]: I1002 18:41:26.499670 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"091b8e1f-4994-4bc6-8be4-c5a44668e088","Type":"ContainerStarted","Data":"b9ca56436bdf83efa67620cd15cf47221dfdbffa350ae79720f1f48d5ce8ec42"} Oct 02 18:41:26 crc kubenswrapper[4832]: I1002 18:41:26.876140 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:41:26 crc kubenswrapper[4832]: I1002 18:41:26.876378 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:41:27 crc kubenswrapper[4832]: I1002 18:41:27.229795 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-h9rpt" Oct 02 18:41:27 crc kubenswrapper[4832]: I1002 18:41:27.295900 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr9tx\" (UniqueName: \"kubernetes.io/projected/96d82e23-d6e0-4faf-922d-505c1e637644-kube-api-access-lr9tx\") pod \"96d82e23-d6e0-4faf-922d-505c1e637644\" (UID: \"96d82e23-d6e0-4faf-922d-505c1e637644\") " Oct 02 18:41:27 crc kubenswrapper[4832]: I1002 18:41:27.528325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-h9rpt" event={"ID":"96d82e23-d6e0-4faf-922d-505c1e637644","Type":"ContainerDied","Data":"c024700d9fa6cb3b02b0840b338583b6351fde90e4a1e8c2b846df6c4e9a992a"} Oct 02 18:41:27 crc kubenswrapper[4832]: I1002 18:41:27.528382 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c024700d9fa6cb3b02b0840b338583b6351fde90e4a1e8c2b846df6c4e9a992a" Oct 02 18:41:27 crc kubenswrapper[4832]: I1002 18:41:27.528466 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-h9rpt" Oct 02 18:41:27 crc kubenswrapper[4832]: I1002 18:41:27.867573 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96d82e23-d6e0-4faf-922d-505c1e637644-kube-api-access-lr9tx" (OuterVolumeSpecName: "kube-api-access-lr9tx") pod "96d82e23-d6e0-4faf-922d-505c1e637644" (UID: "96d82e23-d6e0-4faf-922d-505c1e637644"). InnerVolumeSpecName "kube-api-access-lr9tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:27 crc kubenswrapper[4832]: I1002 18:41:27.922620 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr9tx\" (UniqueName: \"kubernetes.io/projected/96d82e23-d6e0-4faf-922d-505c1e637644-kube-api-access-lr9tx\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.061473 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g48l8" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.068404 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6hsqw" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.078445 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c7xhd" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.127246 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jphg6\" (UniqueName: \"kubernetes.io/projected/878da378-32e5-4349-9902-1f0a9f75c7c1-kube-api-access-jphg6\") pod \"878da378-32e5-4349-9902-1f0a9f75c7c1\" (UID: \"878da378-32e5-4349-9902-1f0a9f75c7c1\") " Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.127560 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-278hb\" (UniqueName: \"kubernetes.io/projected/78425b20-8e1d-4853-89a4-09a2c47be243-kube-api-access-278hb\") pod \"78425b20-8e1d-4853-89a4-09a2c47be243\" (UID: \"78425b20-8e1d-4853-89a4-09a2c47be243\") " Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.127670 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zv58\" (UniqueName: \"kubernetes.io/projected/b1841a9c-82f5-4ece-8913-264ec2f5bdb2-kube-api-access-4zv58\") pod \"b1841a9c-82f5-4ece-8913-264ec2f5bdb2\" (UID: \"b1841a9c-82f5-4ece-8913-264ec2f5bdb2\") " Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.139275 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878da378-32e5-4349-9902-1f0a9f75c7c1-kube-api-access-jphg6" (OuterVolumeSpecName: "kube-api-access-jphg6") pod "878da378-32e5-4349-9902-1f0a9f75c7c1" (UID: "878da378-32e5-4349-9902-1f0a9f75c7c1"). InnerVolumeSpecName "kube-api-access-jphg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.148358 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78425b20-8e1d-4853-89a4-09a2c47be243-kube-api-access-278hb" (OuterVolumeSpecName: "kube-api-access-278hb") pod "78425b20-8e1d-4853-89a4-09a2c47be243" (UID: "78425b20-8e1d-4853-89a4-09a2c47be243"). InnerVolumeSpecName "kube-api-access-278hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.168150 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1841a9c-82f5-4ece-8913-264ec2f5bdb2-kube-api-access-4zv58" (OuterVolumeSpecName: "kube-api-access-4zv58") pod "b1841a9c-82f5-4ece-8913-264ec2f5bdb2" (UID: "b1841a9c-82f5-4ece-8913-264ec2f5bdb2"). InnerVolumeSpecName "kube-api-access-4zv58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.229680 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jphg6\" (UniqueName: \"kubernetes.io/projected/878da378-32e5-4349-9902-1f0a9f75c7c1-kube-api-access-jphg6\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.229707 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-278hb\" (UniqueName: \"kubernetes.io/projected/78425b20-8e1d-4853-89a4-09a2c47be243-kube-api-access-278hb\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.229718 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zv58\" (UniqueName: \"kubernetes.io/projected/b1841a9c-82f5-4ece-8913-264ec2f5bdb2-kube-api-access-4zv58\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.539382 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-6hsqw" event={"ID":"78425b20-8e1d-4853-89a4-09a2c47be243","Type":"ContainerDied","Data":"ab189fd5356a27264ba69b0eb768bd57720c8631da9f41798fc0b7e819d5483d"} Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.539457 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab189fd5356a27264ba69b0eb768bd57720c8631da9f41798fc0b7e819d5483d" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.539391 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-6hsqw" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.541403 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-g48l8" event={"ID":"878da378-32e5-4349-9902-1f0a9f75c7c1","Type":"ContainerDied","Data":"d412481f6f8250d36b528770a92b3ce09ae21aa0e9a4fccfc609b39041d0f5b6"} Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.541500 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d412481f6f8250d36b528770a92b3ce09ae21aa0e9a4fccfc609b39041d0f5b6" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.541447 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-g48l8" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.543032 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c7xhd" event={"ID":"b1841a9c-82f5-4ece-8913-264ec2f5bdb2","Type":"ContainerDied","Data":"0912dbe6936cfcf869e955f985dc38ea70a43b6a93f4c1cb118b1eb8d7ae2a88"} Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.543065 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-c7xhd" Oct 02 18:41:28 crc kubenswrapper[4832]: I1002 18:41:28.543082 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0912dbe6936cfcf869e955f985dc38ea70a43b6a93f4c1cb118b1eb8d7ae2a88" Oct 02 18:41:29 crc kubenswrapper[4832]: I1002 18:41:29.561207 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"091b8e1f-4994-4bc6-8be4-c5a44668e088","Type":"ContainerStarted","Data":"3b118ebdc244e7e18afc8e6a8d8231e6bcd72e28a64667dc9d1dd42669f4ada0"} Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.585375 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-88a1-account-create-hdt2w"] Oct 02 18:41:32 crc kubenswrapper[4832]: E1002 18:41:32.586368 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96d82e23-d6e0-4faf-922d-505c1e637644" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.586383 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="96d82e23-d6e0-4faf-922d-505c1e637644" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: E1002 18:41:32.586401 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878da378-32e5-4349-9902-1f0a9f75c7c1" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.586406 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="878da378-32e5-4349-9902-1f0a9f75c7c1" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: E1002 18:41:32.586422 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78425b20-8e1d-4853-89a4-09a2c47be243" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.586502 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="78425b20-8e1d-4853-89a4-09a2c47be243" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: E1002 18:41:32.586513 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1841a9c-82f5-4ece-8913-264ec2f5bdb2" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.586518 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1841a9c-82f5-4ece-8913-264ec2f5bdb2" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.586706 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="96d82e23-d6e0-4faf-922d-505c1e637644" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.586721 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1841a9c-82f5-4ece-8913-264ec2f5bdb2" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.586736 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="878da378-32e5-4349-9902-1f0a9f75c7c1" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.586747 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="78425b20-8e1d-4853-89a4-09a2c47be243" containerName="mariadb-database-create" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.587528 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.589808 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.596400 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-88a1-account-create-hdt2w"] Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.620560 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4gnh\" (UniqueName: \"kubernetes.io/projected/dd676a64-dc24-42de-9103-9e9d58390f23-kube-api-access-x4gnh\") pod \"mysqld-exporter-88a1-account-create-hdt2w\" (UID: \"dd676a64-dc24-42de-9103-9e9d58390f23\") " pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.722829 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4gnh\" (UniqueName: \"kubernetes.io/projected/dd676a64-dc24-42de-9103-9e9d58390f23-kube-api-access-x4gnh\") pod \"mysqld-exporter-88a1-account-create-hdt2w\" (UID: \"dd676a64-dc24-42de-9103-9e9d58390f23\") " pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.742723 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4gnh\" (UniqueName: \"kubernetes.io/projected/dd676a64-dc24-42de-9103-9e9d58390f23-kube-api-access-x4gnh\") pod \"mysqld-exporter-88a1-account-create-hdt2w\" (UID: \"dd676a64-dc24-42de-9103-9e9d58390f23\") " pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" Oct 02 18:41:32 crc kubenswrapper[4832]: I1002 18:41:32.918783 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.554623 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-88a1-account-create-hdt2w"] Oct 02 18:41:33 crc kubenswrapper[4832]: W1002 18:41:33.565076 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd676a64_dc24_42de_9103_9e9d58390f23.slice/crio-6f8c5e99aaf142fd02d1c4f4b4517dbd8d93a64a8de8beaf442f3c072751db99 WatchSource:0}: Error finding container 6f8c5e99aaf142fd02d1c4f4b4517dbd8d93a64a8de8beaf442f3c072751db99: Status 404 returned error can't find the container with id 6f8c5e99aaf142fd02d1c4f4b4517dbd8d93a64a8de8beaf442f3c072751db99 Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.637327 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vn8w7" event={"ID":"373e2d35-0357-4de4-9315-58efaca557f9","Type":"ContainerStarted","Data":"68bfd7208908963d522c5060d2c22b778f9177768f7999cebeceadeb768b85b8"} Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.645891 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"27ee97623318f33af9ada4257b678fb439dd319e5d189d13301b02aba34ea5b3"} Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.645933 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"facb4371e885fccc592860550dbd9065d3eb1a4031dcda04ebd58718814f8bff"} Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.648179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" event={"ID":"dd676a64-dc24-42de-9103-9e9d58390f23","Type":"ContainerStarted","Data":"6f8c5e99aaf142fd02d1c4f4b4517dbd8d93a64a8de8beaf442f3c072751db99"} Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.664912 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vn8w7" podStartSLOduration=10.821097574 podStartE2EDuration="19.66489189s" podCreationTimestamp="2025-10-02 18:41:14 +0000 UTC" firstStartedPulling="2025-10-02 18:41:24.185546713 +0000 UTC m=+1241.154989585" lastFinishedPulling="2025-10-02 18:41:33.029341019 +0000 UTC m=+1249.998783901" observedRunningTime="2025-10-02 18:41:33.65911911 +0000 UTC m=+1250.628561992" watchObservedRunningTime="2025-10-02 18:41:33.66489189 +0000 UTC m=+1250.634334762" Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.963051 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-29f5-account-create-jjqdt"] Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.967275 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-29f5-account-create-jjqdt" Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.976813 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Oct 02 18:41:33 crc kubenswrapper[4832]: I1002 18:41:33.981174 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-29f5-account-create-jjqdt"] Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.050536 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54r6g\" (UniqueName: \"kubernetes.io/projected/55480b52-9d4b-4b5a-a8b0-4235287fc493-kube-api-access-54r6g\") pod \"heat-29f5-account-create-jjqdt\" (UID: \"55480b52-9d4b-4b5a-a8b0-4235287fc493\") " pod="openstack/heat-29f5-account-create-jjqdt" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.138923 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0640-account-create-l2rlm"] Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.140297 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0640-account-create-l2rlm" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.142140 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.152449 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54r6g\" (UniqueName: \"kubernetes.io/projected/55480b52-9d4b-4b5a-a8b0-4235287fc493-kube-api-access-54r6g\") pod \"heat-29f5-account-create-jjqdt\" (UID: \"55480b52-9d4b-4b5a-a8b0-4235287fc493\") " pod="openstack/heat-29f5-account-create-jjqdt" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.152492 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwx9b\" (UniqueName: \"kubernetes.io/projected/439d6007-1cf6-40ac-9cff-272b66972580-kube-api-access-cwx9b\") pod \"neutron-0640-account-create-l2rlm\" (UID: \"439d6007-1cf6-40ac-9cff-272b66972580\") " pod="openstack/neutron-0640-account-create-l2rlm" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.163898 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0640-account-create-l2rlm"] Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.176231 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54r6g\" (UniqueName: \"kubernetes.io/projected/55480b52-9d4b-4b5a-a8b0-4235287fc493-kube-api-access-54r6g\") pod \"heat-29f5-account-create-jjqdt\" (UID: \"55480b52-9d4b-4b5a-a8b0-4235287fc493\") " pod="openstack/heat-29f5-account-create-jjqdt" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.254717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwx9b\" (UniqueName: \"kubernetes.io/projected/439d6007-1cf6-40ac-9cff-272b66972580-kube-api-access-cwx9b\") pod \"neutron-0640-account-create-l2rlm\" (UID: \"439d6007-1cf6-40ac-9cff-272b66972580\") " pod="openstack/neutron-0640-account-create-l2rlm" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.274631 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwx9b\" (UniqueName: \"kubernetes.io/projected/439d6007-1cf6-40ac-9cff-272b66972580-kube-api-access-cwx9b\") pod \"neutron-0640-account-create-l2rlm\" (UID: \"439d6007-1cf6-40ac-9cff-272b66972580\") " 
pod="openstack/neutron-0640-account-create-l2rlm" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.300607 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-29f5-account-create-jjqdt" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.471602 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0640-account-create-l2rlm" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.692920 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"cbc77f4fc0fd3081d81210c92e6f7a15c15ed51227343ad4cbdfe1a731ca14c8"} Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.692965 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"11849e46d243259d3118c323d1475b9ecf7725d819abf07f9689f8d30e305ef7"} Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.692977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"62d9c400074a7b9b30090fda0f7d43558c5a6fb338d481bb3468c2bd7e0f043b"} Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.692986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"1da2c6bad6dc7d1e0304a6f709d56a27fd4685008471f83944025d97b01250eb"} Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.692995 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"df7b8400-95d5-481a-a9a1-d5b2586f159f","Type":"ContainerStarted","Data":"fe17c475b589cc7120c74281a56ff763d5521740cd0fc2540e943a5b812d24ed"} Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.694669 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd676a64-dc24-42de-9103-9e9d58390f23" containerID="483d738e594cafb67754d92264e109dc765b69da20bca45b67008040827736ed" exitCode=0 Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.694919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" event={"ID":"dd676a64-dc24-42de-9103-9e9d58390f23","Type":"ContainerDied","Data":"483d738e594cafb67754d92264e109dc765b69da20bca45b67008040827736ed"} Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.761771 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.827113918 podStartE2EDuration="57.761751794s" podCreationTimestamp="2025-10-02 18:40:37 +0000 UTC" firstStartedPulling="2025-10-02 18:41:11.093940649 +0000 UTC m=+1228.063383521" lastFinishedPulling="2025-10-02 18:41:33.028578515 +0000 UTC m=+1249.998021397" observedRunningTime="2025-10-02 18:41:34.725569734 +0000 UTC m=+1251.695012596" watchObservedRunningTime="2025-10-02 18:41:34.761751794 +0000 UTC m=+1251.731194666" Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.784055 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-29f5-account-create-jjqdt"] Oct 02 18:41:34 crc kubenswrapper[4832]: W1002 18:41:34.786948 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55480b52_9d4b_4b5a_a8b0_4235287fc493.slice/crio-08be6b45f159012c5481f2509b75bcf5f05139cf13d237ac5702b8d3f1f255cb WatchSource:0}: Error finding container 08be6b45f159012c5481f2509b75bcf5f05139cf13d237ac5702b8d3f1f255cb: Status 404 returned error can't find the container with id 08be6b45f159012c5481f2509b75bcf5f05139cf13d237ac5702b8d3f1f255cb Oct 02 18:41:34 crc kubenswrapper[4832]: I1002 18:41:34.933668 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0640-account-create-l2rlm"] Oct 02 18:41:34 crc kubenswrapper[4832]: W1002 18:41:34.947922 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod439d6007_1cf6_40ac_9cff_272b66972580.slice/crio-6874720f6a100d8d44ebc50c271e695d987553e2fc9ca02c4b22740b13afc672 WatchSource:0}: Error finding container 6874720f6a100d8d44ebc50c271e695d987553e2fc9ca02c4b22740b13afc672: Status 404 returned error can't find the container with id 6874720f6a100d8d44ebc50c271e695d987553e2fc9ca02c4b22740b13afc672 Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.004460 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-46rfg"] Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.006023 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.009499 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.023990 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-46rfg"] Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.073126 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2vk\" (UniqueName: \"kubernetes.io/projected/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-kube-api-access-sk2vk\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.073211 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.073271 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.073321 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-config\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.073411 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.073448 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.175909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk2vk\" (UniqueName: \"kubernetes.io/projected/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-kube-api-access-sk2vk\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.176306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.176365 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.176396 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-config\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.176482 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.176519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.177595 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.178562 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.179034 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-config\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.179253 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-svc\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.179714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.213929 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk2vk\" (UniqueName: \"kubernetes.io/projected/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-kube-api-access-sk2vk\") pod \"dnsmasq-dns-764c5664d7-46rfg\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.397249 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.705298 4832 generic.go:334] "Generic (PLEG): container finished" podID="439d6007-1cf6-40ac-9cff-272b66972580" containerID="a02f592a27434bb3cf3c09f7e69f49a25273a4b763caa36b8067c0f39c9b8cf5" exitCode=0 Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.705400 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0640-account-create-l2rlm" event={"ID":"439d6007-1cf6-40ac-9cff-272b66972580","Type":"ContainerDied","Data":"a02f592a27434bb3cf3c09f7e69f49a25273a4b763caa36b8067c0f39c9b8cf5"} Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.705431 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0640-account-create-l2rlm" event={"ID":"439d6007-1cf6-40ac-9cff-272b66972580","Type":"ContainerStarted","Data":"6874720f6a100d8d44ebc50c271e695d987553e2fc9ca02c4b22740b13afc672"} Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.707309 4832 generic.go:334] "Generic (PLEG): container finished" podID="55480b52-9d4b-4b5a-a8b0-4235287fc493" containerID="638211d2af7d4d1a57ea2295ee30ddc19e60a9032fc0cb56d7094450b74be015" exitCode=0 Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.707522 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-29f5-account-create-jjqdt" event={"ID":"55480b52-9d4b-4b5a-a8b0-4235287fc493","Type":"ContainerDied","Data":"638211d2af7d4d1a57ea2295ee30ddc19e60a9032fc0cb56d7094450b74be015"} Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.707548 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-29f5-account-create-jjqdt" event={"ID":"55480b52-9d4b-4b5a-a8b0-4235287fc493","Type":"ContainerStarted","Data":"08be6b45f159012c5481f2509b75bcf5f05139cf13d237ac5702b8d3f1f255cb"} Oct 02 18:41:35 crc kubenswrapper[4832]: I1002 18:41:35.943420 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-46rfg"] Oct 02 18:41:35 crc kubenswrapper[4832]: W1002 18:41:35.974165 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod454b7082_88ef_4ac0_a0d8_cd4582c4f84d.slice/crio-e9b6d34432c5df228cfcc91ca229aa02479aac523a7622f0c4e1065e6e40bbfe WatchSource:0}: Error finding container e9b6d34432c5df228cfcc91ca229aa02479aac523a7622f0c4e1065e6e40bbfe: Status 404 returned error can't find the container with id e9b6d34432c5df228cfcc91ca229aa02479aac523a7622f0c4e1065e6e40bbfe Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.449337 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.600256 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4gnh\" (UniqueName: \"kubernetes.io/projected/dd676a64-dc24-42de-9103-9e9d58390f23-kube-api-access-x4gnh\") pod \"dd676a64-dc24-42de-9103-9e9d58390f23\" (UID: \"dd676a64-dc24-42de-9103-9e9d58390f23\") " Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.605752 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd676a64-dc24-42de-9103-9e9d58390f23-kube-api-access-x4gnh" (OuterVolumeSpecName: "kube-api-access-x4gnh") pod "dd676a64-dc24-42de-9103-9e9d58390f23" (UID: "dd676a64-dc24-42de-9103-9e9d58390f23"). InnerVolumeSpecName "kube-api-access-x4gnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.702109 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4gnh\" (UniqueName: \"kubernetes.io/projected/dd676a64-dc24-42de-9103-9e9d58390f23-kube-api-access-x4gnh\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.717999 4832 generic.go:334] "Generic (PLEG): container finished" podID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" containerID="afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa" exitCode=0 Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.718092 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" event={"ID":"454b7082-88ef-4ac0-a0d8-cd4582c4f84d","Type":"ContainerDied","Data":"afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa"} Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.718134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" event={"ID":"454b7082-88ef-4ac0-a0d8-cd4582c4f84d","Type":"ContainerStarted","Data":"e9b6d34432c5df228cfcc91ca229aa02479aac523a7622f0c4e1065e6e40bbfe"} Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.720613 4832 generic.go:334] "Generic (PLEG): container finished" podID="091b8e1f-4994-4bc6-8be4-c5a44668e088" containerID="3b118ebdc244e7e18afc8e6a8d8231e6bcd72e28a64667dc9d1dd42669f4ada0" exitCode=0 Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.720697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"091b8e1f-4994-4bc6-8be4-c5a44668e088","Type":"ContainerDied","Data":"3b118ebdc244e7e18afc8e6a8d8231e6bcd72e28a64667dc9d1dd42669f4ada0"} Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.729437 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.730357 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-88a1-account-create-hdt2w" event={"ID":"dd676a64-dc24-42de-9103-9e9d58390f23","Type":"ContainerDied","Data":"6f8c5e99aaf142fd02d1c4f4b4517dbd8d93a64a8de8beaf442f3c072751db99"} Oct 02 18:41:36 crc kubenswrapper[4832]: I1002 18:41:36.730418 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8c5e99aaf142fd02d1c4f4b4517dbd8d93a64a8de8beaf442f3c072751db99" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.371137 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-29f5-account-create-jjqdt" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.448215 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0640-account-create-l2rlm" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.519574 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54r6g\" (UniqueName: \"kubernetes.io/projected/55480b52-9d4b-4b5a-a8b0-4235287fc493-kube-api-access-54r6g\") pod \"55480b52-9d4b-4b5a-a8b0-4235287fc493\" (UID: \"55480b52-9d4b-4b5a-a8b0-4235287fc493\") " Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.519682 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwx9b\" (UniqueName: \"kubernetes.io/projected/439d6007-1cf6-40ac-9cff-272b66972580-kube-api-access-cwx9b\") pod \"439d6007-1cf6-40ac-9cff-272b66972580\" (UID: \"439d6007-1cf6-40ac-9cff-272b66972580\") " Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.524419 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439d6007-1cf6-40ac-9cff-272b66972580-kube-api-access-cwx9b" (OuterVolumeSpecName: "kube-api-access-cwx9b") pod "439d6007-1cf6-40ac-9cff-272b66972580" (UID: "439d6007-1cf6-40ac-9cff-272b66972580"). InnerVolumeSpecName "kube-api-access-cwx9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.524557 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55480b52-9d4b-4b5a-a8b0-4235287fc493-kube-api-access-54r6g" (OuterVolumeSpecName: "kube-api-access-54r6g") pod "55480b52-9d4b-4b5a-a8b0-4235287fc493" (UID: "55480b52-9d4b-4b5a-a8b0-4235287fc493"). InnerVolumeSpecName "kube-api-access-54r6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.621782 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwx9b\" (UniqueName: \"kubernetes.io/projected/439d6007-1cf6-40ac-9cff-272b66972580-kube-api-access-cwx9b\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.622266 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54r6g\" (UniqueName: \"kubernetes.io/projected/55480b52-9d4b-4b5a-a8b0-4235287fc493-kube-api-access-54r6g\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.641269 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:41:37 crc kubenswrapper[4832]: E1002 18:41:37.642061 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd676a64-dc24-42de-9103-9e9d58390f23" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.642081 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd676a64-dc24-42de-9103-9e9d58390f23" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: E1002 18:41:37.642098 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439d6007-1cf6-40ac-9cff-272b66972580" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.642107 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="439d6007-1cf6-40ac-9cff-272b66972580" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: E1002 18:41:37.642131 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55480b52-9d4b-4b5a-a8b0-4235287fc493" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.642139 4832 
state_mem.go:107] "Deleted CPUSet assignment" podUID="55480b52-9d4b-4b5a-a8b0-4235287fc493" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.642409 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="55480b52-9d4b-4b5a-a8b0-4235287fc493" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.642434 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd676a64-dc24-42de-9103-9e9d58390f23" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.642452 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="439d6007-1cf6-40ac-9cff-272b66972580" containerName="mariadb-account-create" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.643705 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.646718 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.657794 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.724387 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.724492 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-config-data\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.724564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5c5c\" (UniqueName: \"kubernetes.io/projected/04b1e748-a409-4bf1-b790-7517f2dfdfe4-kube-api-access-r5c5c\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.756316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-29f5-account-create-jjqdt" event={"ID":"55480b52-9d4b-4b5a-a8b0-4235287fc493","Type":"ContainerDied","Data":"08be6b45f159012c5481f2509b75bcf5f05139cf13d237ac5702b8d3f1f255cb"} Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.756384 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08be6b45f159012c5481f2509b75bcf5f05139cf13d237ac5702b8d3f1f255cb" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.756461 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-29f5-account-create-jjqdt" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.758421 4832 generic.go:334] "Generic (PLEG): container finished" podID="373e2d35-0357-4de4-9315-58efaca557f9" containerID="68bfd7208908963d522c5060d2c22b778f9177768f7999cebeceadeb768b85b8" exitCode=0 Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.758510 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vn8w7" event={"ID":"373e2d35-0357-4de4-9315-58efaca557f9","Type":"ContainerDied","Data":"68bfd7208908963d522c5060d2c22b778f9177768f7999cebeceadeb768b85b8"} Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.780040 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" event={"ID":"454b7082-88ef-4ac0-a0d8-cd4582c4f84d","Type":"ContainerStarted","Data":"0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338"} Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.812537 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"091b8e1f-4994-4bc6-8be4-c5a44668e088","Type":"ContainerStarted","Data":"c5900605e107bac3545e45e3522fac4d4bec1f2de961a2ca4059ba98982e6f86"} Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.815438 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" podStartSLOduration=3.815421246 podStartE2EDuration="3.815421246s" podCreationTimestamp="2025-10-02 18:41:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:41:37.812439294 +0000 UTC m=+1254.781882186" watchObservedRunningTime="2025-10-02 18:41:37.815421246 +0000 UTC m=+1254.784864118" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.826190 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-config-data\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.826285 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5c5c\" (UniqueName: \"kubernetes.io/projected/04b1e748-a409-4bf1-b790-7517f2dfdfe4-kube-api-access-r5c5c\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.826398 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.827096 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0640-account-create-l2rlm" event={"ID":"439d6007-1cf6-40ac-9cff-272b66972580","Type":"ContainerDied","Data":"6874720f6a100d8d44ebc50c271e695d987553e2fc9ca02c4b22740b13afc672"} Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.827132 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6874720f6a100d8d44ebc50c271e695d987553e2fc9ca02c4b22740b13afc672" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.827141 4832 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0640-account-create-l2rlm" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.828732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-665w5" event={"ID":"c83e9ef5-26f5-4ec5-b70c-c28549d863f6","Type":"ContainerStarted","Data":"358dbc395ecb88f1f3493a001f23f1722c287e588d7907136af4d48bd40aa9c0"} Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.841378 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.851738 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-config-data\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.856085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5c5c\" (UniqueName: \"kubernetes.io/projected/04b1e748-a409-4bf1-b790-7517f2dfdfe4-kube-api-access-r5c5c\") pod \"mysqld-exporter-0\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " pod="openstack/mysqld-exporter-0" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.869671 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-665w5" podStartSLOduration=2.902120521 podStartE2EDuration="32.86965423s" podCreationTimestamp="2025-10-02 18:41:05 +0000 UTC" firstStartedPulling="2025-10-02 18:41:06.723180315 +0000 UTC m=+1223.692623187" lastFinishedPulling="2025-10-02 18:41:36.690714024 +0000 UTC m=+1253.660156896" observedRunningTime="2025-10-02 18:41:37.850761939 +0000 UTC m=+1254.820204811" watchObservedRunningTime="2025-10-02 18:41:37.86965423 +0000 UTC m=+1254.839097102" Oct 02 18:41:37 crc kubenswrapper[4832]: I1002 18:41:37.959432 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Oct 02 18:41:38 crc kubenswrapper[4832]: I1002 18:41:38.477428 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:41:38 crc kubenswrapper[4832]: I1002 18:41:38.839275 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"04b1e748-a409-4bf1-b790-7517f2dfdfe4","Type":"ContainerStarted","Data":"cb5e0f44139fc0de1b4d35959c797091c34ffc163a8cb3ee8782832d3197cfcb"} Oct 02 18:41:38 crc kubenswrapper[4832]: I1002 18:41:38.839700 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:39 crc kubenswrapper[4832]: I1002 18:41:39.671343 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:39 crc kubenswrapper[4832]: I1002 18:41:39.764974 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt52p\" (UniqueName: \"kubernetes.io/projected/373e2d35-0357-4de4-9315-58efaca557f9-kube-api-access-vt52p\") pod \"373e2d35-0357-4de4-9315-58efaca557f9\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " Oct 02 18:41:39 crc kubenswrapper[4832]: I1002 18:41:39.765289 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-config-data\") pod \"373e2d35-0357-4de4-9315-58efaca557f9\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " Oct 02 18:41:39 crc kubenswrapper[4832]: I1002 18:41:39.765613 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-combined-ca-bundle\") pod \"373e2d35-0357-4de4-9315-58efaca557f9\" (UID: \"373e2d35-0357-4de4-9315-58efaca557f9\") " Oct 02 18:41:39 crc kubenswrapper[4832]: I1002 18:41:39.852982 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vn8w7" Oct 02 18:41:39 crc kubenswrapper[4832]: I1002 18:41:39.854144 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vn8w7" event={"ID":"373e2d35-0357-4de4-9315-58efaca557f9","Type":"ContainerDied","Data":"a7edeb7092d100786f62116ecf198b9b64b9b7009280036f10626c98d9f4fd52"} Oct 02 18:41:39 crc kubenswrapper[4832]: I1002 18:41:39.854247 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7edeb7092d100786f62116ecf198b9b64b9b7009280036f10626c98d9f4fd52" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.005042 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-46rfg"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.019225 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ztzdn"] Oct 02 18:41:40 crc kubenswrapper[4832]: E1002 18:41:40.019673 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373e2d35-0357-4de4-9315-58efaca557f9" containerName="keystone-db-sync" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.019690 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="373e2d35-0357-4de4-9315-58efaca557f9" containerName="keystone-db-sync" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.019911 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="373e2d35-0357-4de4-9315-58efaca557f9" containerName="keystone-db-sync" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.020582 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.031388 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ztzdn"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.041895 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-6dmzd"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.043751 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.052460 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-6dmzd"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.071545 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-fernet-keys\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.071598 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-combined-ca-bundle\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.071638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-config-data\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.071871 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-credential-keys\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.071934 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbq5\" (UniqueName: \"kubernetes.io/projected/33cc23a9-d9d1-4065-88d3-4450d246b3f6-kube-api-access-7zbq5\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.071971 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-scripts\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.086246 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373e2d35-0357-4de4-9315-58efaca557f9-kube-api-access-vt52p" (OuterVolumeSpecName: "kube-api-access-vt52p") pod "373e2d35-0357-4de4-9315-58efaca557f9" (UID: "373e2d35-0357-4de4-9315-58efaca557f9"). InnerVolumeSpecName "kube-api-access-vt52p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.105708 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-dx58r"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.107417 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.112021 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.112221 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-74z78" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.112460 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dx58r"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.164561 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "373e2d35-0357-4de4-9315-58efaca557f9" (UID: "373e2d35-0357-4de4-9315-58efaca557f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177418 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177466 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-combined-ca-bundle\") pod \"heat-db-sync-dx58r\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177486 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-config-data\") pod \"heat-db-sync-dx58r\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177514 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177549 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-svc\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177596 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-fernet-keys\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177622 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9drd\" (UniqueName: 
\"kubernetes.io/projected/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-kube-api-access-b9drd\") pod \"heat-db-sync-dx58r\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177644 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-combined-ca-bundle\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177664 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8lpw\" (UniqueName: \"kubernetes.io/projected/4f438e78-7ce7-482f-b0a4-8963181b7964-kube-api-access-h8lpw\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177680 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-config-data\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-config\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-credential-keys\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177781 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbq5\" (UniqueName: \"kubernetes.io/projected/33cc23a9-d9d1-4065-88d3-4450d246b3f6-kube-api-access-7zbq5\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177802 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-scripts\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177866 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.177877 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt52p\" (UniqueName: \"kubernetes.io/projected/373e2d35-0357-4de4-9315-58efaca557f9-kube-api-access-vt52p\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.190779 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-scripts\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.193871 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-combined-ca-bundle\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.196959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-fernet-keys\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.197715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-config-data\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.201742 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-credential-keys\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.219575 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbq5\" (UniqueName: \"kubernetes.io/projected/33cc23a9-d9d1-4065-88d3-4450d246b3f6-kube-api-access-7zbq5\") pod \"keystone-bootstrap-ztzdn\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") " pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.278074 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-h97bp"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.279770 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.280930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.281009 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-svc\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.281105 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9drd\" (UniqueName: \"kubernetes.io/projected/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-kube-api-access-b9drd\") pod \"heat-db-sync-dx58r\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.281142 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8lpw\" (UniqueName: \"kubernetes.io/projected/4f438e78-7ce7-482f-b0a4-8963181b7964-kube-api-access-h8lpw\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.281188 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.281214 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-config\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.281311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.281329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-combined-ca-bundle\") pod \"heat-db-sync-dx58r\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.281343 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-config-data\") pod \"heat-db-sync-dx58r\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 
18:41:40.283994 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-svc\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.284732 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-config\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.289328 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.290412 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-config-data" (OuterVolumeSpecName: "config-data") pod "373e2d35-0357-4de4-9315-58efaca557f9" (UID: "373e2d35-0357-4de4-9315-58efaca557f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.290561 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.290630 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.290867 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7dgdj" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.291755 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.295180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.305549 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-config-data\") pod \"heat-db-sync-dx58r\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.313898 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-h97bp"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.327911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-combined-ca-bundle\") pod \"heat-db-sync-dx58r\" 
(UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.338842 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9drd\" (UniqueName: \"kubernetes.io/projected/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-kube-api-access-b9drd\") pod \"heat-db-sync-dx58r\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.343708 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ztzdn" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.346861 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8lpw\" (UniqueName: \"kubernetes.io/projected/4f438e78-7ce7-482f-b0a4-8963181b7964-kube-api-access-h8lpw\") pod \"dnsmasq-dns-5959f8865f-6dmzd\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") " pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.370546 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.385336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l55bk\" (UniqueName: \"kubernetes.io/projected/77fb37d1-dfa6-4ade-9bad-6263a7f22277-kube-api-access-l55bk\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.385414 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-config\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.385496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-combined-ca-bundle\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.385630 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/373e2d35-0357-4de4-9315-58efaca557f9-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.392708 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-lrhfz"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.394202 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.397941 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.398168 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.402677 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9vvrr" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.414543 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-dx58r" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.488278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l55bk\" (UniqueName: \"kubernetes.io/projected/77fb37d1-dfa6-4ade-9bad-6263a7f22277-kube-api-access-l55bk\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.488361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfx7h\" (UniqueName: \"kubernetes.io/projected/78fb0cc0-e570-437c-b527-c925ff84070a-kube-api-access-pfx7h\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.488404 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-config\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.488496 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-scripts\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.488531 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fb0cc0-e570-437c-b527-c925ff84070a-logs\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.488562 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-config-data\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.488579 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-combined-ca-bundle\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.488632 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-combined-ca-bundle\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.490677 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-6dmzd"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.493457 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-combined-ca-bundle\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.496125 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-config\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.506357 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lrhfz"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.507892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l55bk\" (UniqueName: \"kubernetes.io/projected/77fb37d1-dfa6-4ade-9bad-6263a7f22277-kube-api-access-l55bk\") pod \"neutron-db-sync-h97bp\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.521468 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xft64"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.523465 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.545089 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xft64"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.590889 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-config\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.590948 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.590999 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmv62\" (UniqueName: \"kubernetes.io/projected/b5f9b41f-3101-4516-99bd-1612910e0e3c-kube-api-access-lmv62\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.591035 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.591055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfx7h\" (UniqueName: \"kubernetes.io/projected/78fb0cc0-e570-437c-b527-c925ff84070a-kube-api-access-pfx7h\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.591099 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-scripts\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.591117 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.591138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fb0cc0-e570-437c-b527-c925ff84070a-logs\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.591167 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-config-data\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.591184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-combined-ca-bundle\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.591219 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.593451 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fb0cc0-e570-437c-b527-c925ff84070a-logs\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.595625 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-scripts\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.595761 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-config-data\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.596216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-combined-ca-bundle\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.613701 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.618084 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.621512 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfx7h\" (UniqueName: \"kubernetes.io/projected/78fb0cc0-e570-437c-b527-c925ff84070a-kube-api-access-pfx7h\") pod \"placement-db-sync-lrhfz\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.624538 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.624686 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.624742 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.692630 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.692682 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.692724 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x64sv\" (UniqueName: \"kubernetes.io/projected/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-kube-api-access-x64sv\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.692765 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.692843 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-config\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.693210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.693247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: 
\"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.693302 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-scripts\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.693320 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-config-data\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.693343 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmv62\" (UniqueName: \"kubernetes.io/projected/b5f9b41f-3101-4516-99bd-1612910e0e3c-kube-api-access-lmv62\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.693366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-log-httpd\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.693390 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-run-httpd\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.693417 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.696113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.697650 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.698528 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-config\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.698771 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.699066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.720110 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmv62\" (UniqueName: \"kubernetes.io/projected/b5f9b41f-3101-4516-99bd-1612910e0e3c-kube-api-access-lmv62\") pod \"dnsmasq-dns-58dd9ff6bc-xft64\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.739623 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-h97bp" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.748007 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lrhfz" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.795556 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.798568 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-config-data\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.798613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-scripts\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.798658 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-log-httpd\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.798688 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-run-httpd\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.798837 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 
18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.798910 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x64sv\" (UniqueName: \"kubernetes.io/projected/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-kube-api-access-x64sv\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.799874 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-log-httpd\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.800505 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-run-httpd\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.802854 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.803163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.803740 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-config-data\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.804105 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-scripts\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.814998 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x64sv\" (UniqueName: \"kubernetes.io/projected/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-kube-api-access-x64sv\") pod \"ceilometer-0\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") " pod="openstack/ceilometer-0" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.844792 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.871169 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"091b8e1f-4994-4bc6-8be4-c5a44668e088","Type":"ContainerStarted","Data":"a0454bf244d26038594b93de2b9c171a2b977eba19ca225009b374c27ca285af"} Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.871361 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" podUID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" containerName="dnsmasq-dns" containerID="cri-o://0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338" gracePeriod=10 Oct 02 18:41:40 crc kubenswrapper[4832]: I1002 18:41:40.965409 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.864055 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.920646 4832 generic.go:334] "Generic (PLEG): container finished" podID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" containerID="0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338" exitCode=0 Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.920805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" event={"ID":"454b7082-88ef-4ac0-a0d8-cd4582c4f84d","Type":"ContainerDied","Data":"0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338"} Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.920999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" event={"ID":"454b7082-88ef-4ac0-a0d8-cd4582c4f84d","Type":"ContainerDied","Data":"e9b6d34432c5df228cfcc91ca229aa02479aac523a7622f0c4e1065e6e40bbfe"} Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.921028 4832 scope.go:117] "RemoveContainer" containerID="0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338" Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.920880 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-46rfg" Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.932535 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk2vk\" (UniqueName: \"kubernetes.io/projected/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-kube-api-access-sk2vk\") pod \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.932597 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-config\") pod \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.932742 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-sb\") pod \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.932870 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-swift-storage-0\") pod \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.932927 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-nb\") pod \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.932953 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-svc\") pod \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\" (UID: \"454b7082-88ef-4ac0-a0d8-cd4582c4f84d\") " Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.942947 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-kube-api-access-sk2vk" (OuterVolumeSpecName: "kube-api-access-sk2vk") pod "454b7082-88ef-4ac0-a0d8-cd4582c4f84d" (UID: "454b7082-88ef-4ac0-a0d8-cd4582c4f84d"). InnerVolumeSpecName "kube-api-access-sk2vk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.944928 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"091b8e1f-4994-4bc6-8be4-c5a44668e088","Type":"ContainerStarted","Data":"4b1280ec579a80d79e81c1945dec4db21140ea6adfeddcc49264fc9c905e7c5e"} Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.962226 4832 scope.go:117] "RemoveContainer" containerID="afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa" Oct 02 18:41:41 crc kubenswrapper[4832]: I1002 18:41:41.997210 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.997186138 podStartE2EDuration="17.997186138s" podCreationTimestamp="2025-10-02 18:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:41:41.979082352 +0000 UTC m=+1258.948525224" watchObservedRunningTime="2025-10-02 18:41:41.997186138 +0000 UTC m=+1258.966629010" Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.009521 4832 scope.go:117] "RemoveContainer" containerID="0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338" Oct 02 18:41:42 crc kubenswrapper[4832]: E1002 18:41:42.010016 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338\": container with ID starting with 0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338 not found: ID does not exist" containerID="0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338" Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.010075 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338"} err="failed to get container status \"0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338\": rpc error: code = NotFound desc = could not find container \"0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338\": container with ID starting with 0bcbe956a355f37a3fb64d6c661f8984e69ca1a1c33d64812e8b2c7a7b141338 not found: ID does not exist" Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.010149 4832 scope.go:117] "RemoveContainer" containerID="afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa" Oct 02 18:41:42 crc kubenswrapper[4832]: E1002 18:41:42.010599 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa\": container with ID starting with afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa not found: ID does not exist" containerID="afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa" Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.010640 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa"} err="failed to get container status \"afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa\": rpc error: code = NotFound desc = could not find container \"afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa\": container with ID starting with 
afaad9bcf879206db1487217719fd9a13964c1a68086a1219a2d0917ec305dfa not found: ID does not exist"
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.024056 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "454b7082-88ef-4ac0-a0d8-cd4582c4f84d" (UID: "454b7082-88ef-4ac0-a0d8-cd4582c4f84d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.025694 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "454b7082-88ef-4ac0-a0d8-cd4582c4f84d" (UID: "454b7082-88ef-4ac0-a0d8-cd4582c4f84d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.030806 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "454b7082-88ef-4ac0-a0d8-cd4582c4f84d" (UID: "454b7082-88ef-4ac0-a0d8-cd4582c4f84d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.043191 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk2vk\" (UniqueName: \"kubernetes.io/projected/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-kube-api-access-sk2vk\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.043226 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.043235 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.043244 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.047191 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-config" (OuterVolumeSpecName: "config") pod "454b7082-88ef-4ac0-a0d8-cd4582c4f84d" (UID: "454b7082-88ef-4ac0-a0d8-cd4582c4f84d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.125610 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "454b7082-88ef-4ac0-a0d8-cd4582c4f84d" (UID: "454b7082-88ef-4ac0-a0d8-cd4582c4f84d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.146456 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.146491 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/454b7082-88ef-4ac0-a0d8-cd4582c4f84d-config\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.275341 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-46rfg"]
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.286843 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ztzdn"]
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.297247 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-46rfg"]
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.910435 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xft64"]
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.919303 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-6dmzd"]
Oct 02 18:41:42 crc kubenswrapper[4832]: W1002 18:41:42.924185 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3df4096_7c1e_4b8e_bdc8_23bfcbe6e7c4.slice/crio-7363c923cc38a5f4279c0ae84900b4c94ee05528ce20d31760878e4e43b8cca4 WatchSource:0}: Error finding container 7363c923cc38a5f4279c0ae84900b4c94ee05528ce20d31760878e4e43b8cca4: Status 404 returned error can't find the container with id 7363c923cc38a5f4279c0ae84900b4c94ee05528ce20d31760878e4e43b8cca4
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.934595 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-dx58r"]
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.945821 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-h97bp"]
Oct 02 18:41:42 crc kubenswrapper[4832]: W1002 18:41:42.948899 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77fb37d1_dfa6_4ade_9bad_6263a7f22277.slice/crio-a61d4f9bc2362c20c2262ca59477c10676314611818d7a803469de8a4ab26434 WatchSource:0}: Error finding container a61d4f9bc2362c20c2262ca59477c10676314611818d7a803469de8a4ab26434: Status 404 returned error can't find the container with id a61d4f9bc2362c20c2262ca59477c10676314611818d7a803469de8a4ab26434
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.970486 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dx58r" event={"ID":"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4","Type":"ContainerStarted","Data":"7363c923cc38a5f4279c0ae84900b4c94ee05528ce20d31760878e4e43b8cca4"}
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.979043 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ztzdn" event={"ID":"33cc23a9-d9d1-4065-88d3-4450d246b3f6","Type":"ContainerStarted","Data":"02b35e9bd924e57a9787f479171ed3c55be531cce04acad93ea397e0ab01912e"}
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.979079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ztzdn" event={"ID":"33cc23a9-d9d1-4065-88d3-4450d246b3f6","Type":"ContainerStarted","Data":"e6a6108d430f644ec4af09e59fcf041c50bf3a732f9389eebb5b3cfa4020de32"}
Oct 02 18:41:42 crc kubenswrapper[4832]: I1002 18:41:42.994807 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"04b1e748-a409-4bf1-b790-7517f2dfdfe4","Type":"ContainerStarted","Data":"722b6a31d27e7ef42f4934201e6976d164497cde5b6b5a2619e5997134b96d36"}
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.010361 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" event={"ID":"4f438e78-7ce7-482f-b0a4-8963181b7964","Type":"ContainerStarted","Data":"0b71d37ac121bd622a7b1e547a14a4e0fe4890eafafba56f9c71f6f99e9e9a1d"}
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.014575 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" event={"ID":"b5f9b41f-3101-4516-99bd-1612910e0e3c","Type":"ContainerStarted","Data":"1a2f92613de25f0c256b4b48bdf0bb14df8b269c6455d223f56aac2eefd29fc5"}
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.033230 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ztzdn" podStartSLOduration=4.033113599 podStartE2EDuration="4.033113599s" podCreationTimestamp="2025-10-02 18:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:41:43.007117757 +0000 UTC m=+1259.976560629" watchObservedRunningTime="2025-10-02 18:41:43.033113599 +0000 UTC m=+1260.002556471"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.052249 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.15415599 podStartE2EDuration="6.052228675s" podCreationTimestamp="2025-10-02 18:41:37 +0000 UTC" firstStartedPulling="2025-10-02 18:41:38.486148156 +0000 UTC m=+1255.455591038" lastFinishedPulling="2025-10-02 18:41:41.384220851 +0000 UTC m=+1258.353663723" observedRunningTime="2025-10-02 18:41:43.02513199 +0000 UTC m=+1259.994574862" watchObservedRunningTime="2025-10-02 18:41:43.052228675 +0000 UTC m=+1260.021671547"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.186166 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.213904 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-lrhfz"]
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.244770 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" path="/var/lib/kubelet/pods/454b7082-88ef-4ac0-a0d8-cd4582c4f84d/volumes"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.461478 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.939642 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-bae9-account-create-spzmd"]
Oct 02 18:41:43 crc kubenswrapper[4832]: E1002 18:41:43.942189 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" containerName="dnsmasq-dns"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.942331 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" containerName="dnsmasq-dns"
Oct 02 18:41:43 crc kubenswrapper[4832]: E1002 18:41:43.942438 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" containerName="init"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.942503 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" containerName="init"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.942835 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="454b7082-88ef-4ac0-a0d8-cd4582c4f84d" containerName="dnsmasq-dns"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.943709 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-bae9-account-create-spzmd"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.958887 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Oct 02 18:41:43 crc kubenswrapper[4832]: I1002 18:41:43.966362 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-bae9-account-create-spzmd"]
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.024668 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-bd68-account-create-n5nsg"]
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.026195 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bd68-account-create-n5nsg"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.032997 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.039158 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-bd68-account-create-n5nsg"]
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.043707 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-h97bp" event={"ID":"77fb37d1-dfa6-4ade-9bad-6263a7f22277","Type":"ContainerStarted","Data":"7d2e251dafc0c96bb3b4fa434da9173febda388b9577597e33fa89faa96abb1d"}
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.043851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-h97bp" event={"ID":"77fb37d1-dfa6-4ade-9bad-6263a7f22277","Type":"ContainerStarted","Data":"a61d4f9bc2362c20c2262ca59477c10676314611818d7a803469de8a4ab26434"}
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.072068 4832 generic.go:334] "Generic (PLEG): container finished" podID="4f438e78-7ce7-482f-b0a4-8963181b7964" containerID="682a57739d30f814be7a325cc0ce3139af51cc41f1a85f0d5c35a8dd755357e1" exitCode=0
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.072626 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" event={"ID":"4f438e78-7ce7-482f-b0a4-8963181b7964","Type":"ContainerDied","Data":"682a57739d30f814be7a325cc0ce3139af51cc41f1a85f0d5c35a8dd755357e1"}
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.074006 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-h97bp" podStartSLOduration=4.073988254 podStartE2EDuration="4.073988254s" podCreationTimestamp="2025-10-02 18:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:41:44.073196199 +0000 UTC m=+1261.042639071" watchObservedRunningTime="2025-10-02 18:41:44.073988254 +0000 UTC m=+1261.043431126"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.104455 4832 generic.go:334] "Generic (PLEG): container finished" podID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerID="a1e5c61e4fefa0831d3c97b48270c614c9603de91eb3b95bcc1ef636438253a0" exitCode=0
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.104517 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" event={"ID":"b5f9b41f-3101-4516-99bd-1612910e0e3c","Type":"ContainerDied","Data":"a1e5c61e4fefa0831d3c97b48270c614c9603de91eb3b95bcc1ef636438253a0"}
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.115180 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lrhfz" event={"ID":"78fb0cc0-e570-437c-b527-c925ff84070a","Type":"ContainerStarted","Data":"05a8db1ab65fbc10b777268adabad87957adea27f723d8b823a98977fd70d123"}
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.117300 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5","Type":"ContainerStarted","Data":"7a0b2ab57d13c52b853c18c78b350c77e0cc861f3371ce6bb285bc82760a03c2"}
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.153071 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x995s\" (UniqueName: \"kubernetes.io/projected/e4b44163-e584-4c80-a5b6-2f8d6e3af4e5-kube-api-access-x995s\") pod \"cinder-bae9-account-create-spzmd\" (UID: \"e4b44163-e584-4c80-a5b6-2f8d6e3af4e5\") " pod="openstack/cinder-bae9-account-create-spzmd"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.154770 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26g2l\" (UniqueName: \"kubernetes.io/projected/b99959e1-bdb1-4c91-9256-56fff1ac186b-kube-api-access-26g2l\") pod \"barbican-bd68-account-create-n5nsg\" (UID: \"b99959e1-bdb1-4c91-9256-56fff1ac186b\") " pod="openstack/barbican-bd68-account-create-n5nsg"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.256955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26g2l\" (UniqueName: \"kubernetes.io/projected/b99959e1-bdb1-4c91-9256-56fff1ac186b-kube-api-access-26g2l\") pod \"barbican-bd68-account-create-n5nsg\" (UID: \"b99959e1-bdb1-4c91-9256-56fff1ac186b\") " pod="openstack/barbican-bd68-account-create-n5nsg"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.257243 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x995s\" (UniqueName: \"kubernetes.io/projected/e4b44163-e584-4c80-a5b6-2f8d6e3af4e5-kube-api-access-x995s\") pod \"cinder-bae9-account-create-spzmd\" (UID: \"e4b44163-e584-4c80-a5b6-2f8d6e3af4e5\") " pod="openstack/cinder-bae9-account-create-spzmd"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.288179 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26g2l\" (UniqueName: \"kubernetes.io/projected/b99959e1-bdb1-4c91-9256-56fff1ac186b-kube-api-access-26g2l\") pod \"barbican-bd68-account-create-n5nsg\" (UID: \"b99959e1-bdb1-4c91-9256-56fff1ac186b\") " pod="openstack/barbican-bd68-account-create-n5nsg"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.307650 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x995s\" (UniqueName: \"kubernetes.io/projected/e4b44163-e584-4c80-a5b6-2f8d6e3af4e5-kube-api-access-x995s\") pod \"cinder-bae9-account-create-spzmd\" (UID: \"e4b44163-e584-4c80-a5b6-2f8d6e3af4e5\") " pod="openstack/cinder-bae9-account-create-spzmd"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.315805 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-bae9-account-create-spzmd"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.362069 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bd68-account-create-n5nsg"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.843200 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-6dmzd"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.843601 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.891125 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-bae9-account-create-spzmd"]
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.986937 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-svc\") pod \"4f438e78-7ce7-482f-b0a4-8963181b7964\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") "
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.987062 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-config\") pod \"4f438e78-7ce7-482f-b0a4-8963181b7964\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") "
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.987245 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-sb\") pod \"4f438e78-7ce7-482f-b0a4-8963181b7964\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") "
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.987353 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-swift-storage-0\") pod \"4f438e78-7ce7-482f-b0a4-8963181b7964\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") "
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.987387 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8lpw\" (UniqueName: \"kubernetes.io/projected/4f438e78-7ce7-482f-b0a4-8963181b7964-kube-api-access-h8lpw\") pod \"4f438e78-7ce7-482f-b0a4-8963181b7964\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") "
Oct 02 18:41:44 crc kubenswrapper[4832]: I1002 18:41:44.987432 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-nb\") pod \"4f438e78-7ce7-482f-b0a4-8963181b7964\" (UID: \"4f438e78-7ce7-482f-b0a4-8963181b7964\") "
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.011091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f438e78-7ce7-482f-b0a4-8963181b7964-kube-api-access-h8lpw" (OuterVolumeSpecName: "kube-api-access-h8lpw") pod "4f438e78-7ce7-482f-b0a4-8963181b7964" (UID: "4f438e78-7ce7-482f-b0a4-8963181b7964"). InnerVolumeSpecName "kube-api-access-h8lpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.027872 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4f438e78-7ce7-482f-b0a4-8963181b7964" (UID: "4f438e78-7ce7-482f-b0a4-8963181b7964"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.033991 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f438e78-7ce7-482f-b0a4-8963181b7964" (UID: "4f438e78-7ce7-482f-b0a4-8963181b7964"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.060351 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f438e78-7ce7-482f-b0a4-8963181b7964" (UID: "4f438e78-7ce7-482f-b0a4-8963181b7964"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.070673 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-config" (OuterVolumeSpecName: "config") pod "4f438e78-7ce7-482f-b0a4-8963181b7964" (UID: "4f438e78-7ce7-482f-b0a4-8963181b7964"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.093686 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8lpw\" (UniqueName: \"kubernetes.io/projected/4f438e78-7ce7-482f-b0a4-8963181b7964-kube-api-access-h8lpw\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.093725 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.093737 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.093750 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-config\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.093761 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.096208 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f438e78-7ce7-482f-b0a4-8963181b7964" (UID: "4f438e78-7ce7-482f-b0a4-8963181b7964"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.146425 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bae9-account-create-spzmd" event={"ID":"e4b44163-e584-4c80-a5b6-2f8d6e3af4e5","Type":"ContainerStarted","Data":"de1d38749156bdd343642a7ea710a8fe5f43e7eb4ae8fb3fd28efcc011c97dfa"}
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.149206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-6dmzd" event={"ID":"4f438e78-7ce7-482f-b0a4-8963181b7964","Type":"ContainerDied","Data":"0b71d37ac121bd622a7b1e547a14a4e0fe4890eafafba56f9c71f6f99e9e9a1d"}
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.149248 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-6dmzd"
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.149251 4832 scope.go:117] "RemoveContainer" containerID="682a57739d30f814be7a325cc0ce3139af51cc41f1a85f0d5c35a8dd755357e1"
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.150890 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" event={"ID":"b5f9b41f-3101-4516-99bd-1612910e0e3c","Type":"ContainerStarted","Data":"ee794b95261858731463b6ca50aca7ff5e72fea32238539403421f9cff975e46"}
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.173720 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-bd68-account-create-n5nsg"]
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.195762 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f438e78-7ce7-482f-b0a4-8963181b7964-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.246333 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.286591 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" podStartSLOduration=5.286572019 podStartE2EDuration="5.286572019s" podCreationTimestamp="2025-10-02 18:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:41:45.181802229 +0000 UTC m=+1262.151245101" watchObservedRunningTime="2025-10-02 18:41:45.286572019 +0000 UTC m=+1262.256014901"
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.434492 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-6dmzd"]
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.455034 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-6dmzd"]
Oct 02 18:41:45 crc kubenswrapper[4832]: I1002 18:41:45.845544 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64"
Oct 02 18:41:46 crc kubenswrapper[4832]: I1002 18:41:46.164842 4832 generic.go:334] "Generic (PLEG): container finished" podID="e4b44163-e584-4c80-a5b6-2f8d6e3af4e5" containerID="d5f7d739fa5a7208f82254b309da66fca2ac4bfec6cd0afd46f18f9f20eacaa5" exitCode=0
Oct 02 18:41:46 crc kubenswrapper[4832]: I1002 18:41:46.164897 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bae9-account-create-spzmd" event={"ID":"e4b44163-e584-4c80-a5b6-2f8d6e3af4e5","Type":"ContainerDied","Data":"d5f7d739fa5a7208f82254b309da66fca2ac4bfec6cd0afd46f18f9f20eacaa5"}
Oct 02 18:41:46 crc kubenswrapper[4832]: I1002 18:41:46.196764 4832 generic.go:334] "Generic (PLEG): container finished" podID="b99959e1-bdb1-4c91-9256-56fff1ac186b" containerID="c14bce0a9e02e0f85adc31c561faab72456f39786e56500c65f8b05062b1918b" exitCode=0
Oct 02 18:41:46 crc kubenswrapper[4832]: I1002 18:41:46.197108 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bd68-account-create-n5nsg" event={"ID":"b99959e1-bdb1-4c91-9256-56fff1ac186b","Type":"ContainerDied","Data":"c14bce0a9e02e0f85adc31c561faab72456f39786e56500c65f8b05062b1918b"}
Oct 02 18:41:46 crc kubenswrapper[4832]: I1002 18:41:46.197134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bd68-account-create-n5nsg" event={"ID":"b99959e1-bdb1-4c91-9256-56fff1ac186b","Type":"ContainerStarted","Data":"d2423722019de4f63e1636d6dd5a7ad53fd4ca7d7e365f64c9fe6bbc074e0070"}
Oct 02 18:41:47 crc kubenswrapper[4832]: I1002 18:41:47.237400 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f438e78-7ce7-482f-b0a4-8963181b7964" path="/var/lib/kubelet/pods/4f438e78-7ce7-482f-b0a4-8963181b7964/volumes"
Oct 02 18:41:48 crc kubenswrapper[4832]: I1002 18:41:48.223572 4832 generic.go:334] "Generic (PLEG): container finished" podID="33cc23a9-d9d1-4065-88d3-4450d246b3f6" containerID="02b35e9bd924e57a9787f479171ed3c55be531cce04acad93ea397e0ab01912e" exitCode=0
Oct 02 18:41:48 crc kubenswrapper[4832]: I1002 18:41:48.223640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ztzdn" event={"ID":"33cc23a9-d9d1-4065-88d3-4450d246b3f6","Type":"ContainerDied","Data":"02b35e9bd924e57a9787f479171ed3c55be531cce04acad93ea397e0ab01912e"}
Oct 02 18:41:49 crc kubenswrapper[4832]: I1002 18:41:49.409298 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-bae9-account-create-spzmd"
Oct 02 18:41:49 crc kubenswrapper[4832]: I1002 18:41:49.419442 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bd68-account-create-n5nsg"
Oct 02 18:41:49 crc kubenswrapper[4832]: I1002 18:41:49.508907 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26g2l\" (UniqueName: \"kubernetes.io/projected/b99959e1-bdb1-4c91-9256-56fff1ac186b-kube-api-access-26g2l\") pod \"b99959e1-bdb1-4c91-9256-56fff1ac186b\" (UID: \"b99959e1-bdb1-4c91-9256-56fff1ac186b\") "
Oct 02 18:41:49 crc kubenswrapper[4832]: I1002 18:41:49.509485 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x995s\" (UniqueName: \"kubernetes.io/projected/e4b44163-e584-4c80-a5b6-2f8d6e3af4e5-kube-api-access-x995s\") pod \"e4b44163-e584-4c80-a5b6-2f8d6e3af4e5\" (UID: \"e4b44163-e584-4c80-a5b6-2f8d6e3af4e5\") "
Oct 02 18:41:49 crc kubenswrapper[4832]: I1002 18:41:49.519399 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b44163-e584-4c80-a5b6-2f8d6e3af4e5-kube-api-access-x995s" (OuterVolumeSpecName: "kube-api-access-x995s") pod "e4b44163-e584-4c80-a5b6-2f8d6e3af4e5" (UID: "e4b44163-e584-4c80-a5b6-2f8d6e3af4e5"). InnerVolumeSpecName "kube-api-access-x995s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:41:49 crc kubenswrapper[4832]: I1002 18:41:49.522464 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b99959e1-bdb1-4c91-9256-56fff1ac186b-kube-api-access-26g2l" (OuterVolumeSpecName: "kube-api-access-26g2l") pod "b99959e1-bdb1-4c91-9256-56fff1ac186b" (UID: "b99959e1-bdb1-4c91-9256-56fff1ac186b"). InnerVolumeSpecName "kube-api-access-26g2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:41:49 crc kubenswrapper[4832]: I1002 18:41:49.611689 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x995s\" (UniqueName: \"kubernetes.io/projected/e4b44163-e584-4c80-a5b6-2f8d6e3af4e5-kube-api-access-x995s\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:49 crc kubenswrapper[4832]: I1002 18:41:49.611875 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26g2l\" (UniqueName: \"kubernetes.io/projected/b99959e1-bdb1-4c91-9256-56fff1ac186b-kube-api-access-26g2l\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.244646 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bd68-account-create-n5nsg"
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.244665 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bd68-account-create-n5nsg" event={"ID":"b99959e1-bdb1-4c91-9256-56fff1ac186b","Type":"ContainerDied","Data":"d2423722019de4f63e1636d6dd5a7ad53fd4ca7d7e365f64c9fe6bbc074e0070"}
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.244704 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2423722019de4f63e1636d6dd5a7ad53fd4ca7d7e365f64c9fe6bbc074e0070"
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.249734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-bae9-account-create-spzmd" event={"ID":"e4b44163-e584-4c80-a5b6-2f8d6e3af4e5","Type":"ContainerDied","Data":"de1d38749156bdd343642a7ea710a8fe5f43e7eb4ae8fb3fd28efcc011c97dfa"}
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.249761 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de1d38749156bdd343642a7ea710a8fe5f43e7eb4ae8fb3fd28efcc011c97dfa"
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.249830 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-bae9-account-create-spzmd"
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.253959 4832 generic.go:334] "Generic (PLEG): container finished" podID="c83e9ef5-26f5-4ec5-b70c-c28549d863f6" containerID="358dbc395ecb88f1f3493a001f23f1722c287e588d7907136af4d48bd40aa9c0" exitCode=0
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.254004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-665w5" event={"ID":"c83e9ef5-26f5-4ec5-b70c-c28549d863f6","Type":"ContainerDied","Data":"358dbc395ecb88f1f3493a001f23f1722c287e588d7907136af4d48bd40aa9c0"}
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.846547 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64"
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.937648 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-bvkbq"]
Oct 02 18:41:50 crc kubenswrapper[4832]: I1002 18:41:50.941502 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-bvkbq" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" containerName="dnsmasq-dns" containerID="cri-o://e22e6bb8f21faaa9e5928a7da1d104f929bc1fa75a15302672b4a0eb4bca0e0e" gracePeriod=10
Oct 02 18:41:51 crc kubenswrapper[4832]: I1002 18:41:51.268959 4832 generic.go:334] "Generic (PLEG): container finished" podID="d420d739-ca22-4260-b5ca-75b21d89248b" containerID="e22e6bb8f21faaa9e5928a7da1d104f929bc1fa75a15302672b4a0eb4bca0e0e" exitCode=0
Oct 02 18:41:51 crc kubenswrapper[4832]: I1002 18:41:51.269085 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-bvkbq" event={"ID":"d420d739-ca22-4260-b5ca-75b21d89248b","Type":"ContainerDied","Data":"e22e6bb8f21faaa9e5928a7da1d104f929bc1fa75a15302672b4a0eb4bca0e0e"}
Oct 02 18:41:52 crc kubenswrapper[4832]: I1002 18:41:52.690478 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-bvkbq" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.178307 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5gtbq"]
Oct 02 18:41:54 crc kubenswrapper[4832]: E1002 18:41:54.180176 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b44163-e584-4c80-a5b6-2f8d6e3af4e5" containerName="mariadb-account-create"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.180201 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b44163-e584-4c80-a5b6-2f8d6e3af4e5" containerName="mariadb-account-create"
Oct 02 18:41:54 crc kubenswrapper[4832]: E1002 18:41:54.180310 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f438e78-7ce7-482f-b0a4-8963181b7964" containerName="init"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.180327 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f438e78-7ce7-482f-b0a4-8963181b7964" containerName="init"
Oct 02 18:41:54 crc kubenswrapper[4832]: E1002 18:41:54.180343 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b99959e1-bdb1-4c91-9256-56fff1ac186b" containerName="mariadb-account-create"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.180351 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b99959e1-bdb1-4c91-9256-56fff1ac186b" containerName="mariadb-account-create"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.180689 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b44163-e584-4c80-a5b6-2f8d6e3af4e5" containerName="mariadb-account-create"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.180713 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f438e78-7ce7-482f-b0a4-8963181b7964" containerName="init"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.180739 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b99959e1-bdb1-4c91-9256-56fff1ac186b" containerName="mariadb-account-create"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.181637 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.185076 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.185349 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bk8lf"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.185190 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.197779 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5gtbq"]
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.328634 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-config-data\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.328716 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-db-sync-config-data\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.328764 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f03462b3-a4a5-441c-93c5-1f0008d95f21-etc-machine-id\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.328937 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-scripts\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.328982 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrfs\" (UniqueName: \"kubernetes.io/projected/f03462b3-a4a5-441c-93c5-1f0008d95f21-kube-api-access-nbrfs\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.329134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-combined-ca-bundle\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.358446 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pwlwm"]
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.360239 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.362057 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-h8fnf"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.362230 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.381881 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pwlwm"]
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.431296 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-config-data\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.431376 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-db-sync-config-data\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.431408 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f03462b3-a4a5-441c-93c5-1f0008d95f21-etc-machine-id\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.431485 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-scripts\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.431516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrfs\" (UniqueName: \"kubernetes.io/projected/f03462b3-a4a5-441c-93c5-1f0008d95f21-kube-api-access-nbrfs\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.431598 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-combined-ca-bundle\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.433148 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f03462b3-a4a5-441c-93c5-1f0008d95f21-etc-machine-id\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.437896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-combined-ca-bundle\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.438306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-scripts\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.451908 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-config-data\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.452240 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrfs\" (UniqueName: \"kubernetes.io/projected/f03462b3-a4a5-441c-93c5-1f0008d95f21-kube-api-access-nbrfs\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.463061 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-db-sync-config-data\") pod \"cinder-db-sync-5gtbq\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.501982 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5gtbq"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.534901 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-db-sync-config-data\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.534988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-combined-ca-bundle\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.535133 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm49d\" (UniqueName: \"kubernetes.io/projected/4233545e-957d-4d27-b6b0-ac9825530a13-kube-api-access-qm49d\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.637451 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-db-sync-config-data\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.637504 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-combined-ca-bundle\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.637576 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm49d\" (UniqueName: \"kubernetes.io/projected/4233545e-957d-4d27-b6b0-ac9825530a13-kube-api-access-qm49d\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.641110 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-combined-ca-bundle\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.641588 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-db-sync-config-data\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.659038 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm49d\" (UniqueName: \"kubernetes.io/projected/4233545e-957d-4d27-b6b0-ac9825530a13-kube-api-access-qm49d\") pod \"barbican-db-sync-pwlwm\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.696206 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pwlwm"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.844464 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Oct 02 18:41:54 crc kubenswrapper[4832]: I1002 18:41:54.850204 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Oct 02 18:41:55 crc kubenswrapper[4832]: I1002 18:41:55.321167 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Oct 02 18:41:56 crc kubenswrapper[4832]: I1002 18:41:56.878607 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 18:41:56 crc kubenswrapper[4832]: I1002 18:41:56.880019 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 18:41:56 crc kubenswrapper[4832]: I1002 18:41:56.880166 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg"
Oct 02 18:41:56 crc kubenswrapper[4832]: I1002 18:41:56.881499 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c688f74c22f81ea3d61106cf0c7f62698937fd7d0fad6673fa18a7fd31c7b079"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 02 18:41:56 crc kubenswrapper[4832]: I1002 18:41:56.881576 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://c688f74c22f81ea3d61106cf0c7f62698937fd7d0fad6673fa18a7fd31c7b079" gracePeriod=600
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.350877 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="c688f74c22f81ea3d61106cf0c7f62698937fd7d0fad6673fa18a7fd31c7b079" exitCode=0
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.350947 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"c688f74c22f81ea3d61106cf0c7f62698937fd7d0fad6673fa18a7fd31c7b079"}
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.350999 4832 scope.go:117] "RemoveContainer" containerID="65d3e2d93b30b70c1447cb55a9f5b7b0ff104d9e0a2d6e88b49ea2c6960bc4e2"
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.485855 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ztzdn"
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.625641 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-combined-ca-bundle\") pod \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") "
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.625821 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zbq5\" (UniqueName: \"kubernetes.io/projected/33cc23a9-d9d1-4065-88d3-4450d246b3f6-kube-api-access-7zbq5\") pod \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") "
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.625889 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-fernet-keys\") pod \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") "
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.625992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-scripts\") pod \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") "
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.626158 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-config-data\") pod \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") "
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.626328 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-credential-keys\") pod \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\" (UID: \"33cc23a9-d9d1-4065-88d3-4450d246b3f6\") "
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.633488 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "33cc23a9-d9d1-4065-88d3-4450d246b3f6" (UID: "33cc23a9-d9d1-4065-88d3-4450d246b3f6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.633638 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cc23a9-d9d1-4065-88d3-4450d246b3f6-kube-api-access-7zbq5" (OuterVolumeSpecName: "kube-api-access-7zbq5") pod "33cc23a9-d9d1-4065-88d3-4450d246b3f6" (UID: "33cc23a9-d9d1-4065-88d3-4450d246b3f6"). InnerVolumeSpecName "kube-api-access-7zbq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.634050 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-scripts" (OuterVolumeSpecName: "scripts") pod "33cc23a9-d9d1-4065-88d3-4450d246b3f6" (UID: "33cc23a9-d9d1-4065-88d3-4450d246b3f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.636651 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "33cc23a9-d9d1-4065-88d3-4450d246b3f6" (UID: "33cc23a9-d9d1-4065-88d3-4450d246b3f6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.663023 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33cc23a9-d9d1-4065-88d3-4450d246b3f6" (UID: "33cc23a9-d9d1-4065-88d3-4450d246b3f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.666305 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-config-data" (OuterVolumeSpecName: "config-data") pod "33cc23a9-d9d1-4065-88d3-4450d246b3f6" (UID: "33cc23a9-d9d1-4065-88d3-4450d246b3f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.731569 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-credential-keys\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.731604 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.731614 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zbq5\" (UniqueName: \"kubernetes.io/projected/33cc23a9-d9d1-4065-88d3-4450d246b3f6-kube-api-access-7zbq5\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.731627 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.731635 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-scripts\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:57 crc kubenswrapper[4832]: I1002 18:41:57.731643 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33cc23a9-d9d1-4065-88d3-4450d246b3f6-config-data\") on node \"crc\" DevicePath \"\""
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.362958 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ztzdn" event={"ID":"33cc23a9-d9d1-4065-88d3-4450d246b3f6","Type":"ContainerDied","Data":"e6a6108d430f644ec4af09e59fcf041c50bf3a732f9389eebb5b3cfa4020de32"}
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.363455 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6a6108d430f644ec4af09e59fcf041c50bf3a732f9389eebb5b3cfa4020de32"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.363151 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ztzdn"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.603729 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ztzdn"]
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.620317 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ztzdn"]
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.687205 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hjtkl"]
Oct 02 18:41:58 crc kubenswrapper[4832]: E1002 18:41:58.687967 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cc23a9-d9d1-4065-88d3-4450d246b3f6" containerName="keystone-bootstrap"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.687991 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cc23a9-d9d1-4065-88d3-4450d246b3f6" containerName="keystone-bootstrap"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.688412 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cc23a9-d9d1-4065-88d3-4450d246b3f6" containerName="keystone-bootstrap"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.689842 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.692538 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.692667 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.692679 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r8zxk"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.694363 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.707017 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hjtkl"]
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.855505 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-config-data\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.855745 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clhdb\" (UniqueName: \"kubernetes.io/projected/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-kube-api-access-clhdb\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.855913 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-scripts\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.856139 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-fernet-keys\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.856513 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-combined-ca-bundle\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.856614 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-credential-keys\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.959107 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-combined-ca-bundle\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.959217 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-credential-keys\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.959394 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-config-data\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.959479 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clhdb\" (UniqueName: \"kubernetes.io/projected/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-kube-api-access-clhdb\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.959529 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-scripts\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.959625 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-fernet-keys\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl"
Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.965135 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-combined-ca-bundle\")
pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.966595 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-credential-keys\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.968590 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-scripts\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.969848 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-fernet-keys\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.970892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-config-data\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:41:58 crc kubenswrapper[4832]: I1002 18:41:58.993644 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clhdb\" (UniqueName: \"kubernetes.io/projected/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-kube-api-access-clhdb\") pod \"keystone-bootstrap-hjtkl\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.019036 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.256981 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cc23a9-d9d1-4065-88d3-4450d246b3f6" path="/var/lib/kubelet/pods/33cc23a9-d9d1-4065-88d3-4450d246b3f6/volumes" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.322534 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-665w5" Oct 02 18:41:59 crc kubenswrapper[4832]: E1002 18:41:59.371467 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Oct 02 18:41:59 crc kubenswrapper[4832]: E1002 18:41:59.371628 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9drd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-dx58r_openstack(c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:41:59 crc kubenswrapper[4832]: E1002 18:41:59.372770 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-dx58r" podUID="c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.381762 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-665w5" event={"ID":"c83e9ef5-26f5-4ec5-b70c-c28549d863f6","Type":"ContainerDied","Data":"4b376820e34e47f0e4f57603a67e2d60758b878f970220fdd74ea7f628d77d5a"} Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.381811 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b376820e34e47f0e4f57603a67e2d60758b878f970220fdd74ea7f628d77d5a" Oct 02 18:41:59 crc 
kubenswrapper[4832]: I1002 18:41:59.381823 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-665w5" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.473146 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-combined-ca-bundle\") pod \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.473364 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-db-sync-config-data\") pod \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.473474 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldrnc\" (UniqueName: \"kubernetes.io/projected/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-kube-api-access-ldrnc\") pod \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.473522 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-config-data\") pod \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\" (UID: \"c83e9ef5-26f5-4ec5-b70c-c28549d863f6\") " Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.477844 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c83e9ef5-26f5-4ec5-b70c-c28549d863f6" (UID: "c83e9ef5-26f5-4ec5-b70c-c28549d863f6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.482544 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-kube-api-access-ldrnc" (OuterVolumeSpecName: "kube-api-access-ldrnc") pod "c83e9ef5-26f5-4ec5-b70c-c28549d863f6" (UID: "c83e9ef5-26f5-4ec5-b70c-c28549d863f6"). InnerVolumeSpecName "kube-api-access-ldrnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.505011 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c83e9ef5-26f5-4ec5-b70c-c28549d863f6" (UID: "c83e9ef5-26f5-4ec5-b70c-c28549d863f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.545179 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-config-data" (OuterVolumeSpecName: "config-data") pod "c83e9ef5-26f5-4ec5-b70c-c28549d863f6" (UID: "c83e9ef5-26f5-4ec5-b70c-c28549d863f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.575646 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.575684 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldrnc\" (UniqueName: \"kubernetes.io/projected/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-kube-api-access-ldrnc\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.575698 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.575711 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83e9ef5-26f5-4ec5-b70c-c28549d863f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:41:59 crc kubenswrapper[4832]: I1002 18:41:59.949700 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.097860 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-dns-svc\") pod \"d420d739-ca22-4260-b5ca-75b21d89248b\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.097929 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-config\") pod \"d420d739-ca22-4260-b5ca-75b21d89248b\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.097961 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-nb\") pod \"d420d739-ca22-4260-b5ca-75b21d89248b\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.097991 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-sb\") pod \"d420d739-ca22-4260-b5ca-75b21d89248b\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.098036 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbk9b\" (UniqueName: \"kubernetes.io/projected/d420d739-ca22-4260-b5ca-75b21d89248b-kube-api-access-zbk9b\") pod \"d420d739-ca22-4260-b5ca-75b21d89248b\" (UID: \"d420d739-ca22-4260-b5ca-75b21d89248b\") " Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.119752 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d420d739-ca22-4260-b5ca-75b21d89248b-kube-api-access-zbk9b" (OuterVolumeSpecName: "kube-api-access-zbk9b") pod "d420d739-ca22-4260-b5ca-75b21d89248b" (UID: "d420d739-ca22-4260-b5ca-75b21d89248b"). InnerVolumeSpecName "kube-api-access-zbk9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.173420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d420d739-ca22-4260-b5ca-75b21d89248b" (UID: "d420d739-ca22-4260-b5ca-75b21d89248b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.174490 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-config" (OuterVolumeSpecName: "config") pod "d420d739-ca22-4260-b5ca-75b21d89248b" (UID: "d420d739-ca22-4260-b5ca-75b21d89248b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.194491 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d420d739-ca22-4260-b5ca-75b21d89248b" (UID: "d420d739-ca22-4260-b5ca-75b21d89248b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:00 crc kubenswrapper[4832]: E1002 18:42:00.202146 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Oct 02 18:42:00 crc kubenswrapper[4832]: E1002 18:42:00.202481 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d7h64bh5c8hc4hd6hfdhcchcfh78h6bhb8h9h58dh557h76h6chd6h699h54bh5b4h5bchbh9bh5d8h98h64bh69h5dbh67bhc9h5d8h656q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x64sv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c9f3139b-0c15-4734-8b9c-d753cf1f2cb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.203467 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbk9b\" (UniqueName: \"kubernetes.io/projected/d420d739-ca22-4260-b5ca-75b21d89248b-kube-api-access-zbk9b\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.203489 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.203498 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.203506 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.208668 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d420d739-ca22-4260-b5ca-75b21d89248b" (UID: "d420d739-ca22-4260-b5ca-75b21d89248b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.305219 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d420d739-ca22-4260-b5ca-75b21d89248b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.402481 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-bvkbq" event={"ID":"d420d739-ca22-4260-b5ca-75b21d89248b","Type":"ContainerDied","Data":"124f308c3aef9ed48d124866e5d2c3f86f094d0962444fae043c905882c2e0f6"} Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.402509 4832 util.go:48] "No ready sandbox for pod can be found. 
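
Both pulls above fail with "rpc error: code = Canceled desc = copying config: context canceled": the pulls of quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified and openstack-ceilometer-central:current-podified were cancelled mid-copy rather than refused by the registry, so the kubelet dumps the full container spec under "Unhandled Error" and, as the entries that follow show, heat-db-sync-dx58r drops into ImagePullBackOff. A minimal sketch for flagging pods stuck this way, assuming the `kubernetes` Python client:

    # List containers in "openstack" currently waiting on a failed image pull;
    # the reasons mirror the ErrImagePull/ImagePullBackOff entries in this log.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for pod in v1.list_namespaced_pod("openstack").items:
        for cs in pod.status.container_statuses or []:
            waiting = cs.state.waiting
            if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
                print(pod.metadata.name, cs.name, waiting.reason, waiting.message)
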
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-bvkbq" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.402545 4832 scope.go:117] "RemoveContainer" containerID="e22e6bb8f21faaa9e5928a7da1d104f929bc1fa75a15302672b4a0eb4bca0e0e" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.441759 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-bvkbq"] Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.444038 4832 scope.go:117] "RemoveContainer" containerID="1f9a095214cd1030e8ae9292f31240f9e6e829633daca43acb7e8bc9c602ae01" Oct 02 18:42:00 crc kubenswrapper[4832]: E1002 18:42:00.444055 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-dx58r" podUID="c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.453100 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-bvkbq"] Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.506240 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pwlwm"] Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.740666 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5gtbq"] Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.803345 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-b94z6"] Oct 02 18:42:00 crc kubenswrapper[4832]: E1002 18:42:00.803781 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83e9ef5-26f5-4ec5-b70c-c28549d863f6" containerName="glance-db-sync" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.803792 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83e9ef5-26f5-4ec5-b70c-c28549d863f6" containerName="glance-db-sync" Oct 02 18:42:00 crc kubenswrapper[4832]: E1002 18:42:00.803815 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" containerName="dnsmasq-dns" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.803822 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" containerName="dnsmasq-dns" Oct 02 18:42:00 crc kubenswrapper[4832]: E1002 18:42:00.803853 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" containerName="init" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.803859 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" containerName="init" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.804051 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83e9ef5-26f5-4ec5-b70c-c28549d863f6" containerName="glance-db-sync" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.804066 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" containerName="dnsmasq-dns" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.805161 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.819846 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-b94z6"] Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.874284 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hjtkl"] Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.917461 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstrh\" (UniqueName: \"kubernetes.io/projected/9c4ee378-c41a-4461-91ce-8de208177861-kube-api-access-jstrh\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.917786 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.917840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.917878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.917920 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:00 crc kubenswrapper[4832]: I1002 18:42:00.917949 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-config\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.019416 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jstrh\" (UniqueName: \"kubernetes.io/projected/9c4ee378-c41a-4461-91ce-8de208177861-kube-api-access-jstrh\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.019477 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: 
\"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.019524 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.019552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.019582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.019599 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-config\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.020433 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-config\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.021059 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.021100 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.021667 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.021955 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: 
I1002 18:42:01.039280 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jstrh\" (UniqueName: \"kubernetes.io/projected/9c4ee378-c41a-4461-91ce-8de208177861-kube-api-access-jstrh\") pod \"dnsmasq-dns-785d8bcb8c-b94z6\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.127517 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.258932 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" path="/var/lib/kubelet/pods/d420d739-ca22-4260-b5ca-75b21d89248b/volumes" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.421496 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5gtbq" event={"ID":"f03462b3-a4a5-441c-93c5-1f0008d95f21","Type":"ContainerStarted","Data":"49430731befb9c95fc765b363d76a4c30149924a601436b78125173fdbf7d8fe"} Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.423046 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pwlwm" event={"ID":"4233545e-957d-4d27-b6b0-ac9825530a13","Type":"ContainerStarted","Data":"e87246ecb4c4ed6bf1c5c67dcdacb6364cf2ae890ccb50007cf1c3aee1b66e21"} Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.424841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lrhfz" event={"ID":"78fb0cc0-e570-437c-b527-c925ff84070a","Type":"ContainerStarted","Data":"e06bd3a855fd570d1520e686e9546b597caeb0e3007a06149252be4e7b1369b2"} Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.429300 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjtkl" event={"ID":"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5","Type":"ContainerStarted","Data":"46b679f538c188670cdda0092f087b52c3b05b73ce5adaa8cc17f790bd4c8a26"} Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.429380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjtkl" event={"ID":"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5","Type":"ContainerStarted","Data":"ea9e6b611e2f79671102afe2112dff00a0ffc03e7bed9ff5065d7f82c802785d"} Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.435285 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"d11c7dd5e816b980d09e31f34fc920edcbd862f94c306350838f9fcadaa3f9f6"} Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.467709 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-lrhfz" podStartSLOduration=4.773147359 podStartE2EDuration="21.46768461s" podCreationTimestamp="2025-10-02 18:41:40 +0000 UTC" firstStartedPulling="2025-10-02 18:41:43.244684303 +0000 UTC m=+1260.214127175" lastFinishedPulling="2025-10-02 18:41:59.939221554 +0000 UTC m=+1276.908664426" observedRunningTime="2025-10-02 18:42:01.443706911 +0000 UTC m=+1278.413149793" watchObservedRunningTime="2025-10-02 18:42:01.46768461 +0000 UTC m=+1278.437127472" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.519927 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hjtkl" podStartSLOduration=3.51990332 podStartE2EDuration="3.51990332s" 
podCreationTimestamp="2025-10-02 18:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:01.501485006 +0000 UTC m=+1278.470927887" watchObservedRunningTime="2025-10-02 18:42:01.51990332 +0000 UTC m=+1278.489346192" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.640458 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-b94z6"] Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.701100 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.703662 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.706392 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.706802 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rvnpc" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.706954 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.736460 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.837990 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-logs\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.838070 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.838115 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24w48\" (UniqueName: \"kubernetes.io/projected/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-kube-api-access-24w48\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.838515 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.841253 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 
18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.841465 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.841529 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.943604 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.945555 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946031 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-logs\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946109 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24w48\" (UniqueName: \"kubernetes.io/projected/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-kube-api-access-24w48\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946157 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946236 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946257 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946303 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946596 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.946652 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-logs\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.949497 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.952454 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.956469 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.956985 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.962905 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.968061 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24w48\" (UniqueName: \"kubernetes.io/projected/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-kube-api-access-24w48\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:01 crc kubenswrapper[4832]: I1002 18:42:01.998343 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.015966 4832 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.045474 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.048958 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.049030 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.049050 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blb7n\" (UniqueName: \"kubernetes.io/projected/af7f9ee8-d267-4aec-b95e-6ecfea75264d-kube-api-access-blb7n\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.049091 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.049134 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.049172 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.049197 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-logs\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.151159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.151876 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.151917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blb7n\" (UniqueName: \"kubernetes.io/projected/af7f9ee8-d267-4aec-b95e-6ecfea75264d-kube-api-access-blb7n\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.151972 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.152029 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.152087 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.152128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-logs\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.152835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-logs\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.153218 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.153826 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.161291 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.161908 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.164142 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.168220 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blb7n\" (UniqueName: \"kubernetes.io/projected/af7f9ee8-d267-4aec-b95e-6ecfea75264d-kube-api-access-blb7n\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.265835 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.462029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" event={"ID":"9c4ee378-c41a-4461-91ce-8de208177861","Type":"ContainerStarted","Data":"578ab805e530863ed65b5a7dbff7286041745f433af17d0d2655c407bb4f694b"} Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.462138 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" event={"ID":"9c4ee378-c41a-4461-91ce-8de208177861","Type":"ContainerStarted","Data":"5e547f5a24aa928db87e51555c2a37413dc9c9d135b8296dbc50ace8141954fc"} Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.557029 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.690939 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-bvkbq" podUID="d420d739-ca22-4260-b5ca-75b21d89248b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Oct 02 18:42:02 crc kubenswrapper[4832]: I1002 18:42:02.703810 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:03 crc kubenswrapper[4832]: I1002 18:42:03.480820 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c4ee378-c41a-4461-91ce-8de208177861" containerID="578ab805e530863ed65b5a7dbff7286041745f433af17d0d2655c407bb4f694b" exitCode=0 Oct 02 18:42:03 crc kubenswrapper[4832]: I1002 18:42:03.481029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" event={"ID":"9c4ee378-c41a-4461-91ce-8de208177861","Type":"ContainerDied","Data":"578ab805e530863ed65b5a7dbff7286041745f433af17d0d2655c407bb4f694b"} Oct 02 18:42:03 crc kubenswrapper[4832]: I1002 18:42:03.628638 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:03 crc kubenswrapper[4832]: I1002 18:42:03.686866 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:04 crc kubenswrapper[4832]: I1002 18:42:04.496413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7","Type":"ContainerStarted","Data":"4c4bce92e8e1e492b20cdf31ac644c96443e51917d427a9f344f122f8343f77c"} Oct 02 18:42:04 crc kubenswrapper[4832]: I1002 18:42:04.928531 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:04 crc kubenswrapper[4832]: W1002 18:42:04.931328 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf7f9ee8_d267_4aec_b95e_6ecfea75264d.slice/crio-4e09d8563757babc692f08d8025ec86e575f7f9e4b64158fc9e3f0b6a5d727d5 WatchSource:0}: Error finding container 4e09d8563757babc692f08d8025ec86e575f7f9e4b64158fc9e3f0b6a5d727d5: Status 404 returned error can't find the container with id 4e09d8563757babc692f08d8025ec86e575f7f9e4b64158fc9e3f0b6a5d727d5 Oct 02 18:42:05 crc kubenswrapper[4832]: I1002 18:42:05.508839 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5","Type":"ContainerStarted","Data":"de6d71fd90e233d95c8cb7f239d8e2f73d0800a7aac63f8faad9a3e965ce52c3"} Oct 02 18:42:05 crc kubenswrapper[4832]: I1002 18:42:05.510636 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" event={"ID":"9c4ee378-c41a-4461-91ce-8de208177861","Type":"ContainerStarted","Data":"91fb91a489427342718fc0e12aa9a981fafe0ace647fa95011bf4ba4f07c63d9"} Oct 02 18:42:05 crc kubenswrapper[4832]: I1002 18:42:05.511082 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:05 crc kubenswrapper[4832]: I1002 18:42:05.522548 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7","Type":"ContainerStarted","Data":"6ba5b0835f7f50a4495a70f5bb9c1bf87745d2a097317fe4cb779d759db8096d"} Oct 02 18:42:05 crc kubenswrapper[4832]: I1002 18:42:05.524571 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af7f9ee8-d267-4aec-b95e-6ecfea75264d","Type":"ContainerStarted","Data":"4e09d8563757babc692f08d8025ec86e575f7f9e4b64158fc9e3f0b6a5d727d5"} Oct 02 18:42:05 crc kubenswrapper[4832]: I1002 18:42:05.543211 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" podStartSLOduration=5.543193364 podStartE2EDuration="5.543193364s" podCreationTimestamp="2025-10-02 18:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:05.540793709 +0000 UTC m=+1282.510236581" watchObservedRunningTime="2025-10-02 18:42:05.543193364 +0000 UTC m=+1282.512636236" Oct 02 18:42:06 crc kubenswrapper[4832]: I1002 18:42:06.536095 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af7f9ee8-d267-4aec-b95e-6ecfea75264d","Type":"ContainerStarted","Data":"9dd193bd3208ab3edb1ce07a65abb390d1611f19a5613b580a1b8d1721ff8712"} Oct 02 18:42:07 crc kubenswrapper[4832]: I1002 18:42:07.548632 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjtkl" event={"ID":"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5","Type":"ContainerDied","Data":"46b679f538c188670cdda0092f087b52c3b05b73ce5adaa8cc17f790bd4c8a26"} Oct 02 18:42:07 crc kubenswrapper[4832]: I1002 18:42:07.548580 4832 generic.go:334] "Generic (PLEG): container finished" podID="4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" containerID="46b679f538c188670cdda0092f087b52c3b05b73ce5adaa8cc17f790bd4c8a26" exitCode=0 Oct 02 18:42:08 crc kubenswrapper[4832]: I1002 18:42:08.568980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pwlwm" event={"ID":"4233545e-957d-4d27-b6b0-ac9825530a13","Type":"ContainerStarted","Data":"44f55dcef8c69307c8ecf514e7467393f40b59cc988f5fb1e6238e6581f8f1f6"} Oct 02 18:42:08 crc kubenswrapper[4832]: I1002 18:42:08.576072 4832 generic.go:334] "Generic (PLEG): container finished" podID="78fb0cc0-e570-437c-b527-c925ff84070a" containerID="e06bd3a855fd570d1520e686e9546b597caeb0e3007a06149252be4e7b1369b2" exitCode=0 Oct 02 18:42:08 crc kubenswrapper[4832]: I1002 18:42:08.576251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lrhfz" event={"ID":"78fb0cc0-e570-437c-b527-c925ff84070a","Type":"ContainerDied","Data":"e06bd3a855fd570d1520e686e9546b597caeb0e3007a06149252be4e7b1369b2"} Oct 02 18:42:08 crc kubenswrapper[4832]: I1002 18:42:08.603531 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pwlwm" podStartSLOduration=6.867341858 podStartE2EDuration="14.603507275s" podCreationTimestamp="2025-10-02 18:41:54 +0000 UTC" firstStartedPulling="2025-10-02 18:42:00.509249969 +0000 UTC m=+1277.478692841" lastFinishedPulling="2025-10-02 18:42:08.245415386 +0000 UTC m=+1285.214858258" observedRunningTime="2025-10-02 18:42:08.59823774 +0000 UTC m=+1285.567680622" watchObservedRunningTime="2025-10-02 18:42:08.603507275 +0000 UTC m=+1285.572950147" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.195025 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.275755 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-combined-ca-bundle\") pod \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.275918 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-scripts\") pod \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.275977 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-credential-keys\") pod \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.276125 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-fernet-keys\") pod \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.276169 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clhdb\" (UniqueName: \"kubernetes.io/projected/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-kube-api-access-clhdb\") pod \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.276208 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-config-data\") pod \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\" (UID: \"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5\") " Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.290918 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" (UID: "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.292709 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-kube-api-access-clhdb" (OuterVolumeSpecName: "kube-api-access-clhdb") pod "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" (UID: "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5"). InnerVolumeSpecName "kube-api-access-clhdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.317683 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" (UID: "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.339424 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-scripts" (OuterVolumeSpecName: "scripts") pod "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" (UID: "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.339932 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-config-data" (OuterVolumeSpecName: "config-data") pod "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" (UID: "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.367273 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" (UID: "4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.382327 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.382654 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.382671 4832 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.382704 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.382720 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clhdb\" (UniqueName: \"kubernetes.io/projected/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-kube-api-access-clhdb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.382739 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.595838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af7f9ee8-d267-4aec-b95e-6ecfea75264d","Type":"ContainerStarted","Data":"3838a661e97d7f8bdd4a93f0b9a1965b25d9c8c9500447667442368401678416"} Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.596114 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerName="glance-log" 
containerID="cri-o://9dd193bd3208ab3edb1ce07a65abb390d1611f19a5613b580a1b8d1721ff8712" gracePeriod=30 Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.596718 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerName="glance-httpd" containerID="cri-o://3838a661e97d7f8bdd4a93f0b9a1965b25d9c8c9500447667442368401678416" gracePeriod=30 Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.599613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjtkl" event={"ID":"4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5","Type":"ContainerDied","Data":"ea9e6b611e2f79671102afe2112dff00a0ffc03e7bed9ff5065d7f82c802785d"} Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.599631 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjtkl" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.599648 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea9e6b611e2f79671102afe2112dff00a0ffc03e7bed9ff5065d7f82c802785d" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.607940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7","Type":"ContainerStarted","Data":"c61c30ecff5b30226c9b1ae78c48dc669d21fd96ca080b25d55d1f781470a3c2"} Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.608292 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerName="glance-log" containerID="cri-o://6ba5b0835f7f50a4495a70f5bb9c1bf87745d2a097317fe4cb779d759db8096d" gracePeriod=30 Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.608992 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerName="glance-httpd" containerID="cri-o://c61c30ecff5b30226c9b1ae78c48dc669d21fd96ca080b25d55d1f781470a3c2" gracePeriod=30 Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.648344 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.648322153 podStartE2EDuration="9.648322153s" podCreationTimestamp="2025-10-02 18:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:09.622315061 +0000 UTC m=+1286.591757933" watchObservedRunningTime="2025-10-02 18:42:09.648322153 +0000 UTC m=+1286.617765025" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.662536 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.662519286 podStartE2EDuration="9.662519286s" podCreationTimestamp="2025-10-02 18:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:09.656355914 +0000 UTC m=+1286.625798796" watchObservedRunningTime="2025-10-02 18:42:09.662519286 +0000 UTC m=+1286.631962158" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.787848 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54ddcb9945-p7pkt"] Oct 02 18:42:09 crc kubenswrapper[4832]: 
E1002 18:42:09.788508 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" containerName="keystone-bootstrap" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.788526 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" containerName="keystone-bootstrap" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.788729 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" containerName="keystone-bootstrap" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.790522 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.794308 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.794741 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.794866 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.795027 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.795162 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-r8zxk" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.795285 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.818511 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54ddcb9945-p7pkt"] Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.892617 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-combined-ca-bundle\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.892665 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-public-tls-certs\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.892700 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-config-data\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.892725 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-scripts\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 
18:42:09.892782 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-credential-keys\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.892799 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-fernet-keys\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.892822 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd5zf\" (UniqueName: \"kubernetes.io/projected/e632994f-7397-4c6f-950a-bcdff946d4e2-kube-api-access-rd5zf\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.892886 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-internal-tls-certs\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.994821 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-internal-tls-certs\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.994944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-combined-ca-bundle\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.994976 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-public-tls-certs\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.995017 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-config-data\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.995050 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-scripts\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.995124 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-credential-keys\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.995165 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-fernet-keys\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:09 crc kubenswrapper[4832]: I1002 18:42:09.995196 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd5zf\" (UniqueName: \"kubernetes.io/projected/e632994f-7397-4c6f-950a-bcdff946d4e2-kube-api-access-rd5zf\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.005226 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-fernet-keys\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.006097 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-scripts\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.006117 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-combined-ca-bundle\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.006540 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-internal-tls-certs\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.006559 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-credential-keys\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.007579 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-public-tls-certs\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.019657 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd5zf\" (UniqueName: 
\"kubernetes.io/projected/e632994f-7397-4c6f-950a-bcdff946d4e2-kube-api-access-rd5zf\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.022068 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e632994f-7397-4c6f-950a-bcdff946d4e2-config-data\") pod \"keystone-54ddcb9945-p7pkt\" (UID: \"e632994f-7397-4c6f-950a-bcdff946d4e2\") " pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.109469 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.337405 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-lrhfz" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.411327 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fb0cc0-e570-437c-b527-c925ff84070a-logs\") pod \"78fb0cc0-e570-437c-b527-c925ff84070a\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.411414 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfx7h\" (UniqueName: \"kubernetes.io/projected/78fb0cc0-e570-437c-b527-c925ff84070a-kube-api-access-pfx7h\") pod \"78fb0cc0-e570-437c-b527-c925ff84070a\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.411476 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-scripts\") pod \"78fb0cc0-e570-437c-b527-c925ff84070a\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.411515 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-combined-ca-bundle\") pod \"78fb0cc0-e570-437c-b527-c925ff84070a\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.411707 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-config-data\") pod \"78fb0cc0-e570-437c-b527-c925ff84070a\" (UID: \"78fb0cc0-e570-437c-b527-c925ff84070a\") " Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.412406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fb0cc0-e570-437c-b527-c925ff84070a-logs" (OuterVolumeSpecName: "logs") pod "78fb0cc0-e570-437c-b527-c925ff84070a" (UID: "78fb0cc0-e570-437c-b527-c925ff84070a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.418184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-scripts" (OuterVolumeSpecName: "scripts") pod "78fb0cc0-e570-437c-b527-c925ff84070a" (UID: "78fb0cc0-e570-437c-b527-c925ff84070a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.418765 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fb0cc0-e570-437c-b527-c925ff84070a-kube-api-access-pfx7h" (OuterVolumeSpecName: "kube-api-access-pfx7h") pod "78fb0cc0-e570-437c-b527-c925ff84070a" (UID: "78fb0cc0-e570-437c-b527-c925ff84070a"). InnerVolumeSpecName "kube-api-access-pfx7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.458820 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78fb0cc0-e570-437c-b527-c925ff84070a" (UID: "78fb0cc0-e570-437c-b527-c925ff84070a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.460951 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-config-data" (OuterVolumeSpecName: "config-data") pod "78fb0cc0-e570-437c-b527-c925ff84070a" (UID: "78fb0cc0-e570-437c-b527-c925ff84070a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.515742 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.515789 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78fb0cc0-e570-437c-b527-c925ff84070a-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.515802 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfx7h\" (UniqueName: \"kubernetes.io/projected/78fb0cc0-e570-437c-b527-c925ff84070a-kube-api-access-pfx7h\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.515815 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.515826 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78fb0cc0-e570-437c-b527-c925ff84070a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.639132 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-lrhfz" event={"ID":"78fb0cc0-e570-437c-b527-c925ff84070a","Type":"ContainerDied","Data":"05a8db1ab65fbc10b777268adabad87957adea27f723d8b823a98977fd70d123"} Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.639166 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05a8db1ab65fbc10b777268adabad87957adea27f723d8b823a98977fd70d123" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.639141 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-lrhfz" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.642921 4832 generic.go:334] "Generic (PLEG): container finished" podID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerID="3838a661e97d7f8bdd4a93f0b9a1965b25d9c8c9500447667442368401678416" exitCode=0 Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.642941 4832 generic.go:334] "Generic (PLEG): container finished" podID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerID="9dd193bd3208ab3edb1ce07a65abb390d1611f19a5613b580a1b8d1721ff8712" exitCode=143 Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.642972 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af7f9ee8-d267-4aec-b95e-6ecfea75264d","Type":"ContainerDied","Data":"3838a661e97d7f8bdd4a93f0b9a1965b25d9c8c9500447667442368401678416"} Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.642989 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af7f9ee8-d267-4aec-b95e-6ecfea75264d","Type":"ContainerDied","Data":"9dd193bd3208ab3edb1ce07a65abb390d1611f19a5613b580a1b8d1721ff8712"} Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.661189 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerID="c61c30ecff5b30226c9b1ae78c48dc669d21fd96ca080b25d55d1f781470a3c2" exitCode=0 Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.661217 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerID="6ba5b0835f7f50a4495a70f5bb9c1bf87745d2a097317fe4cb779d759db8096d" exitCode=143 Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.661237 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7","Type":"ContainerDied","Data":"c61c30ecff5b30226c9b1ae78c48dc669d21fd96ca080b25d55d1f781470a3c2"} Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.661281 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7","Type":"ContainerDied","Data":"6ba5b0835f7f50a4495a70f5bb9c1bf87745d2a097317fe4cb779d759db8096d"} Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.759802 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7975695b86-g5x7n"] Oct 02 18:42:10 crc kubenswrapper[4832]: E1002 18:42:10.760673 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fb0cc0-e570-437c-b527-c925ff84070a" containerName="placement-db-sync" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.760695 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fb0cc0-e570-437c-b527-c925ff84070a" containerName="placement-db-sync" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.764349 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fb0cc0-e570-437c-b527-c925ff84070a" containerName="placement-db-sync" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.767810 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.770718 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9vvrr" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.770955 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.771063 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.771162 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.771310 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.800825 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7975695b86-g5x7n"] Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.828414 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-public-tls-certs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.828782 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67fzr\" (UniqueName: \"kubernetes.io/projected/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-kube-api-access-67fzr\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.828835 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-config-data\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.829367 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-internal-tls-certs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.829949 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-scripts\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.830096 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-logs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.830204 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-combined-ca-bundle\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.933597 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-public-tls-certs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.934473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67fzr\" (UniqueName: \"kubernetes.io/projected/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-kube-api-access-67fzr\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.934513 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-config-data\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.934556 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-internal-tls-certs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.934586 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-scripts\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.934626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-logs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.934648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-combined-ca-bundle\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.936454 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-logs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.945259 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-combined-ca-bundle\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.948605 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-public-tls-certs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.948691 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-config-data\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.952032 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-scripts\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.954096 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67fzr\" (UniqueName: \"kubernetes.io/projected/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-kube-api-access-67fzr\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.963705 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54ddcb9945-p7pkt"] Oct 02 18:42:10 crc kubenswrapper[4832]: I1002 18:42:10.963767 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f-internal-tls-certs\") pod \"placement-7975695b86-g5x7n\" (UID: \"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f\") " pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:11 crc kubenswrapper[4832]: I1002 18:42:11.104581 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:11 crc kubenswrapper[4832]: I1002 18:42:11.130390 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:11 crc kubenswrapper[4832]: I1002 18:42:11.293715 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xft64"] Oct 02 18:42:11 crc kubenswrapper[4832]: I1002 18:42:11.293982 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="dnsmasq-dns" containerID="cri-o://ee794b95261858731463b6ca50aca7ff5e72fea32238539403421f9cff975e46" gracePeriod=10 Oct 02 18:42:11 crc kubenswrapper[4832]: I1002 18:42:11.675103 4832 generic.go:334] "Generic (PLEG): container finished" podID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerID="ee794b95261858731463b6ca50aca7ff5e72fea32238539403421f9cff975e46" exitCode=0 Oct 02 18:42:11 crc kubenswrapper[4832]: I1002 18:42:11.675465 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" event={"ID":"b5f9b41f-3101-4516-99bd-1612910e0e3c","Type":"ContainerDied","Data":"ee794b95261858731463b6ca50aca7ff5e72fea32238539403421f9cff975e46"} Oct 02 18:42:12 crc kubenswrapper[4832]: I1002 18:42:12.691512 4832 generic.go:334] "Generic (PLEG): container finished" podID="4233545e-957d-4d27-b6b0-ac9825530a13" containerID="44f55dcef8c69307c8ecf514e7467393f40b59cc988f5fb1e6238e6581f8f1f6" exitCode=0 Oct 02 18:42:12 crc kubenswrapper[4832]: I1002 18:42:12.691669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pwlwm" event={"ID":"4233545e-957d-4d27-b6b0-ac9825530a13","Type":"ContainerDied","Data":"44f55dcef8c69307c8ecf514e7467393f40b59cc988f5fb1e6238e6581f8f1f6"} Oct 02 18:42:15 crc kubenswrapper[4832]: I1002 18:42:15.846321 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: connect: connection refused" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.509156 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.516226 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.536784 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pwlwm" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672319 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-logs\") pod \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672400 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-combined-ca-bundle\") pod \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-httpd-run\") pod \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672521 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm49d\" (UniqueName: \"kubernetes.io/projected/4233545e-957d-4d27-b6b0-ac9825530a13-kube-api-access-qm49d\") pod \"4233545e-957d-4d27-b6b0-ac9825530a13\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672555 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-httpd-run\") pod \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672586 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672647 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-combined-ca-bundle\") pod \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672719 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-combined-ca-bundle\") pod \"4233545e-957d-4d27-b6b0-ac9825530a13\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672740 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blb7n\" (UniqueName: \"kubernetes.io/projected/af7f9ee8-d267-4aec-b95e-6ecfea75264d-kube-api-access-blb7n\") pod \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672803 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-config-data\") pod \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\" (UID: 
\"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672823 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672851 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-logs\") pod \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672864 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-config-data\") pod \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672879 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-db-sync-config-data\") pod \"4233545e-957d-4d27-b6b0-ac9825530a13\" (UID: \"4233545e-957d-4d27-b6b0-ac9825530a13\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672902 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-scripts\") pod \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\" (UID: \"af7f9ee8-d267-4aec-b95e-6ecfea75264d\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672973 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24w48\" (UniqueName: \"kubernetes.io/projected/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-kube-api-access-24w48\") pod \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.672988 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-scripts\") pod \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\" (UID: \"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7\") " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.674378 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-logs" (OuterVolumeSpecName: "logs") pod "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" (UID: "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.678983 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af7f9ee8-d267-4aec-b95e-6ecfea75264d-kube-api-access-blb7n" (OuterVolumeSpecName: "kube-api-access-blb7n") pod "af7f9ee8-d267-4aec-b95e-6ecfea75264d" (UID: "af7f9ee8-d267-4aec-b95e-6ecfea75264d"). InnerVolumeSpecName "kube-api-access-blb7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.680548 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-scripts" (OuterVolumeSpecName: "scripts") pod "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" (UID: "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.682822 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-logs" (OuterVolumeSpecName: "logs") pod "af7f9ee8-d267-4aec-b95e-6ecfea75264d" (UID: "af7f9ee8-d267-4aec-b95e-6ecfea75264d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.683979 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "af7f9ee8-d267-4aec-b95e-6ecfea75264d" (UID: "af7f9ee8-d267-4aec-b95e-6ecfea75264d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.684277 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" (UID: "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.689340 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "af7f9ee8-d267-4aec-b95e-6ecfea75264d" (UID: "af7f9ee8-d267-4aec-b95e-6ecfea75264d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.689377 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-kube-api-access-24w48" (OuterVolumeSpecName: "kube-api-access-24w48") pod "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" (UID: "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7"). InnerVolumeSpecName "kube-api-access-24w48". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.690077 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4233545e-957d-4d27-b6b0-ac9825530a13-kube-api-access-qm49d" (OuterVolumeSpecName: "kube-api-access-qm49d") pod "4233545e-957d-4d27-b6b0-ac9825530a13" (UID: "4233545e-957d-4d27-b6b0-ac9825530a13"). InnerVolumeSpecName "kube-api-access-qm49d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.695166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-scripts" (OuterVolumeSpecName: "scripts") pod "af7f9ee8-d267-4aec-b95e-6ecfea75264d" (UID: "af7f9ee8-d267-4aec-b95e-6ecfea75264d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.695884 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" (UID: "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.708785 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4233545e-957d-4d27-b6b0-ac9825530a13" (UID: "4233545e-957d-4d27-b6b0-ac9825530a13"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.738476 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" (UID: "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.743754 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54ddcb9945-p7pkt" event={"ID":"e632994f-7397-4c6f-950a-bcdff946d4e2","Type":"ContainerStarted","Data":"a9c0afbf01dc3a253f674417908c7cc82bcb9c8919fb754dbc66d444db7629af"} Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.747316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"af7f9ee8-d267-4aec-b95e-6ecfea75264d","Type":"ContainerDied","Data":"4e09d8563757babc692f08d8025ec86e575f7f9e4b64158fc9e3f0b6a5d727d5"} Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.747363 4832 scope.go:117] "RemoveContainer" containerID="3838a661e97d7f8bdd4a93f0b9a1965b25d9c8c9500447667442368401678416" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.747508 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.751482 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pwlwm" event={"ID":"4233545e-957d-4d27-b6b0-ac9825530a13","Type":"ContainerDied","Data":"e87246ecb4c4ed6bf1c5c67dcdacb6364cf2ae890ccb50007cf1c3aee1b66e21"} Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.751534 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e87246ecb4c4ed6bf1c5c67dcdacb6364cf2ae890ccb50007cf1c3aee1b66e21" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.751616 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pwlwm" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.758467 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-config-data" (OuterVolumeSpecName: "config-data") pod "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" (UID: "bc3fd122-9e81-4a61-b8c7-c955fbbc23c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.764874 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bc3fd122-9e81-4a61-b8c7-c955fbbc23c7","Type":"ContainerDied","Data":"4c4bce92e8e1e492b20cdf31ac644c96443e51917d427a9f344f122f8343f77c"} Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.764970 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.768946 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af7f9ee8-d267-4aec-b95e-6ecfea75264d" (UID: "af7f9ee8-d267-4aec-b95e-6ecfea75264d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775477 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775521 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775532 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775544 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775554 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775563 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24w48\" (UniqueName: \"kubernetes.io/projected/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-kube-api-access-24w48\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775571 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775579 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775587 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775594 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/af7f9ee8-d267-4aec-b95e-6ecfea75264d-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775603 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm49d\" (UniqueName: \"kubernetes.io/projected/4233545e-957d-4d27-b6b0-ac9825530a13-kube-api-access-qm49d\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775611 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775624 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775632 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.775640 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blb7n\" (UniqueName: \"kubernetes.io/projected/af7f9ee8-d267-4aec-b95e-6ecfea75264d-kube-api-access-blb7n\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.781478 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-config-data" (OuterVolumeSpecName: "config-data") pod "af7f9ee8-d267-4aec-b95e-6ecfea75264d" (UID: "af7f9ee8-d267-4aec-b95e-6ecfea75264d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.790341 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4233545e-957d-4d27-b6b0-ac9825530a13" (UID: "4233545e-957d-4d27-b6b0-ac9825530a13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.799025 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.805249 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.878245 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.878294 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4233545e-957d-4d27-b6b0-ac9825530a13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.878306 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.878315 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af7f9ee8-d267-4aec-b95e-6ecfea75264d-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.889345 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.912949 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.925720 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:16 crc kubenswrapper[4832]: E1002 18:42:16.926323 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerName="glance-httpd" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926348 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerName="glance-httpd" Oct 02 18:42:16 crc kubenswrapper[4832]: E1002 18:42:16.926362 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4233545e-957d-4d27-b6b0-ac9825530a13" containerName="barbican-db-sync" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926370 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4233545e-957d-4d27-b6b0-ac9825530a13" containerName="barbican-db-sync" Oct 02 18:42:16 crc kubenswrapper[4832]: E1002 18:42:16.926382 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerName="glance-log" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926389 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerName="glance-log" Oct 02 18:42:16 crc kubenswrapper[4832]: E1002 18:42:16.926416 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerName="glance-httpd" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926422 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerName="glance-httpd" Oct 02 18:42:16 crc kubenswrapper[4832]: E1002 18:42:16.926435 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerName="glance-log" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926442 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerName="glance-log" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926701 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerName="glance-httpd" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926726 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerName="glance-log" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926739 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" containerName="glance-httpd" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926755 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4233545e-957d-4d27-b6b0-ac9825530a13" containerName="barbican-db-sync" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.926770 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" containerName="glance-log" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.928319 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.931667 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.931725 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 02 18:42:16 crc kubenswrapper[4832]: I1002 18:42:16.943441 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.081551 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.081638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-config-data\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.081660 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-scripts\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.081718 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5g8\" 
(UniqueName: \"kubernetes.io/projected/de5b2270-9247-4b59-873f-00cdf454635c-kube-api-access-2v5g8\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.081755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.081770 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.081791 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.081810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-logs\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.092411 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.104296 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.125170 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.131593 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.145152 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.146940 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.152688 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.183648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v5g8\" (UniqueName: \"kubernetes.io/projected/de5b2270-9247-4b59-873f-00cdf454635c-kube-api-access-2v5g8\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.184074 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.184111 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.184134 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.184154 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-logs\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.184305 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.184411 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-config-data\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.184432 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.185821 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-logs\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.185925 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.189293 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.195277 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-config-data\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.195300 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-scripts\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.195393 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.195891 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.209607 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v5g8\" (UniqueName: \"kubernetes.io/projected/de5b2270-9247-4b59-873f-00cdf454635c-kube-api-access-2v5g8\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.217742 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " pod="openstack/glance-default-external-api-0" Oct 02 
18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.250069 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af7f9ee8-d267-4aec-b95e-6ecfea75264d" path="/var/lib/kubelet/pods/af7f9ee8-d267-4aec-b95e-6ecfea75264d/volumes" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.251005 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc3fd122-9e81-4a61-b8c7-c955fbbc23c7" path="/var/lib/kubelet/pods/bc3fd122-9e81-4a61-b8c7-c955fbbc23c7/volumes" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.263100 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.286145 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.286239 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhnlh\" (UniqueName: \"kubernetes.io/projected/eb31897d-9d37-446e-9cde-08d0e12fc428-kube-api-access-dhnlh\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.286537 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.286587 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.286608 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.286641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.286783 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 
18:42:17.286803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.388982 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.389050 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.389138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.389201 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhnlh\" (UniqueName: \"kubernetes.io/projected/eb31897d-9d37-446e-9cde-08d0e12fc428-kube-api-access-dhnlh\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.389304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.389329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.389369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.389410 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.390447 4832 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.390875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.390951 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-logs\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.394329 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.394620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.394966 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.395451 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.411621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhnlh\" (UniqueName: \"kubernetes.io/projected/eb31897d-9d37-446e-9cde-08d0e12fc428-kube-api-access-dhnlh\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.418117 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.456841 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.867520 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5fd9c9bb87-rf75p"] Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.869970 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.874031 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.874305 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-h8fnf" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.874461 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.889867 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-855fbd5c98-k2t4b"] Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.893063 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.895422 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.930053 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd9c9bb87-rf75p"] Oct 02 18:42:17 crc kubenswrapper[4832]: I1002 18:42:17.956747 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-855fbd5c98-k2t4b"] Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.001337 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-74rdq"] Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.003818 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004590 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqkpb\" (UniqueName: \"kubernetes.io/projected/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-kube-api-access-hqkpb\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-combined-ca-bundle\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004689 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-combined-ca-bundle\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-config-data-custom\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004738 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-config-data-custom\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004762 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-logs\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004809 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pdz9\" (UniqueName: \"kubernetes.io/projected/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-kube-api-access-8pdz9\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-config-data\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 
18:42:18.004867 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-logs\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.004903 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-config-data\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.040236 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-74rdq"] Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109520 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqkpb\" (UniqueName: \"kubernetes.io/projected/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-kube-api-access-hqkpb\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-config\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109585 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-combined-ca-bundle\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109602 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sm8r\" (UniqueName: \"kubernetes.io/projected/7520e8d3-0462-4184-9198-8620f4bf0684-kube-api-access-7sm8r\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-combined-ca-bundle\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109673 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-config-data-custom\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109701 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-config-data-custom\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109722 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-logs\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109763 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109787 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pdz9\" (UniqueName: \"kubernetes.io/projected/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-kube-api-access-8pdz9\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109811 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-config-data\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109831 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109846 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109875 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-logs\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109899 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-config-data\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.109917 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.119723 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-logs\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.121652 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-logs\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.126379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-config-data\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.128860 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7cc9b5f5c6-zlcjl"] Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.130848 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-config-data-custom\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.130901 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.131306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-combined-ca-bundle\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.131489 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-combined-ca-bundle\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.132425 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-config-data\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.135174 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-config-data-custom\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.136117 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.151928 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pdz9\" (UniqueName: \"kubernetes.io/projected/f944fb96-3cf4-42b3-b5b8-3da8dc107d7c-kube-api-access-8pdz9\") pod \"barbican-worker-5fd9c9bb87-rf75p\" (UID: \"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c\") " pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.152713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqkpb\" (UniqueName: \"kubernetes.io/projected/32db7ef2-6bb9-4834-9c9d-3bb13309b0e9-kube-api-access-hqkpb\") pod \"barbican-keystone-listener-855fbd5c98-k2t4b\" (UID: \"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9\") " pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.173622 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cc9b5f5c6-zlcjl"] Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.207851 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5fd9c9bb87-rf75p" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212507 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212620 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data-custom\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212645 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212702 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212786 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-combined-ca-bundle\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212860 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-config\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212884 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sm8r\" (UniqueName: \"kubernetes.io/projected/7520e8d3-0462-4184-9198-8620f4bf0684-kube-api-access-7sm8r\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: 
\"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212924 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-logs\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.212950 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-458cq\" (UniqueName: \"kubernetes.io/projected/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-kube-api-access-458cq\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.218211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.218801 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.222956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.222956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-config\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.226389 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.244707 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.250079 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sm8r\" (UniqueName: \"kubernetes.io/projected/7520e8d3-0462-4184-9198-8620f4bf0684-kube-api-access-7sm8r\") pod \"dnsmasq-dns-586bdc5f9-74rdq\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.315005 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data-custom\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.315111 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.315145 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-combined-ca-bundle\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.315178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-logs\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.315200 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-458cq\" (UniqueName: \"kubernetes.io/projected/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-kube-api-access-458cq\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.320585 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-logs\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.322012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-combined-ca-bundle\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.324081 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data-custom\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " 
pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.324308 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.334347 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-458cq\" (UniqueName: \"kubernetes.io/projected/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-kube-api-access-458cq\") pod \"barbican-api-7cc9b5f5c6-zlcjl\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.346133 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:18 crc kubenswrapper[4832]: I1002 18:42:18.601874 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.328277 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-575ff4d8db-jrg4j"] Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.332767 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.337846 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.338065 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.350245 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-575ff4d8db-jrg4j"] Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.498976 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69xqn\" (UniqueName: \"kubernetes.io/projected/a65ae528-fb46-44a4-a3a3-543acfb646a9-kube-api-access-69xqn\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.499501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-public-tls-certs\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.499594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-config-data\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.499664 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a65ae528-fb46-44a4-a3a3-543acfb646a9-logs\") pod 
\"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.499714 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-internal-tls-certs\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.499738 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-config-data-custom\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.499755 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-combined-ca-bundle\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.601587 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-internal-tls-certs\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.601632 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-config-data-custom\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.601651 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-combined-ca-bundle\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.601686 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69xqn\" (UniqueName: \"kubernetes.io/projected/a65ae528-fb46-44a4-a3a3-543acfb646a9-kube-api-access-69xqn\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.601798 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-public-tls-certs\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.601849 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-config-data\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.601919 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a65ae528-fb46-44a4-a3a3-543acfb646a9-logs\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.602428 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a65ae528-fb46-44a4-a3a3-543acfb646a9-logs\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.615057 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-combined-ca-bundle\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.616709 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-public-tls-certs\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.618548 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-config-data\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.622693 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-internal-tls-certs\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.629322 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a65ae528-fb46-44a4-a3a3-543acfb646a9-config-data-custom\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.632085 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69xqn\" (UniqueName: \"kubernetes.io/projected/a65ae528-fb46-44a4-a3a3-543acfb646a9-kube-api-access-69xqn\") pod \"barbican-api-575ff4d8db-jrg4j\" (UID: \"a65ae528-fb46-44a4-a3a3-543acfb646a9\") " pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:20 crc kubenswrapper[4832]: I1002 18:42:20.651675 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-575ff4d8db-jrg4j" Oct 02 18:42:24 crc kubenswrapper[4832]: I1002 18:42:24.863856 4832 generic.go:334] "Generic (PLEG): container finished" podID="77fb37d1-dfa6-4ade-9bad-6263a7f22277" containerID="7d2e251dafc0c96bb3b4fa434da9173febda388b9577597e33fa89faa96abb1d" exitCode=0 Oct 02 18:42:24 crc kubenswrapper[4832]: I1002 18:42:24.863931 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-h97bp" event={"ID":"77fb37d1-dfa6-4ade-9bad-6263a7f22277","Type":"ContainerDied","Data":"7d2e251dafc0c96bb3b4fa434da9173febda388b9577597e33fa89faa96abb1d"} Oct 02 18:42:25 crc kubenswrapper[4832]: I1002 18:42:25.846866 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: i/o timeout" Oct 02 18:42:27 crc kubenswrapper[4832]: E1002 18:42:27.311490 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 02 18:42:27 crc kubenswrapper[4832]: E1002 18:42:27.311795 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nbrfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFro
m:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5gtbq_openstack(f03462b3-a4a5-441c-93c5-1f0008d95f21): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:42:27 crc kubenswrapper[4832]: E1002 18:42:27.313159 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5gtbq" podUID="f03462b3-a4a5-441c-93c5-1f0008d95f21" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.366147 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.380530 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-sb\") pod \"b5f9b41f-3101-4516-99bd-1612910e0e3c\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.380703 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-svc\") pod \"b5f9b41f-3101-4516-99bd-1612910e0e3c\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.380790 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmv62\" (UniqueName: \"kubernetes.io/projected/b5f9b41f-3101-4516-99bd-1612910e0e3c-kube-api-access-lmv62\") pod \"b5f9b41f-3101-4516-99bd-1612910e0e3c\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.380825 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-config\") pod \"b5f9b41f-3101-4516-99bd-1612910e0e3c\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.380874 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-swift-storage-0\") pod \"b5f9b41f-3101-4516-99bd-1612910e0e3c\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.380960 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-nb\") pod \"b5f9b41f-3101-4516-99bd-1612910e0e3c\" (UID: \"b5f9b41f-3101-4516-99bd-1612910e0e3c\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.387167 4832 scope.go:117] "RemoveContainer" containerID="9dd193bd3208ab3edb1ce07a65abb390d1611f19a5613b580a1b8d1721ff8712" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.403338 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f9b41f-3101-4516-99bd-1612910e0e3c-kube-api-access-lmv62" (OuterVolumeSpecName: "kube-api-access-lmv62") pod "b5f9b41f-3101-4516-99bd-1612910e0e3c" (UID: "b5f9b41f-3101-4516-99bd-1612910e0e3c"). 
InnerVolumeSpecName "kube-api-access-lmv62". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.470477 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-config" (OuterVolumeSpecName: "config") pod "b5f9b41f-3101-4516-99bd-1612910e0e3c" (UID: "b5f9b41f-3101-4516-99bd-1612910e0e3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.480104 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b5f9b41f-3101-4516-99bd-1612910e0e3c" (UID: "b5f9b41f-3101-4516-99bd-1612910e0e3c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.484518 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmv62\" (UniqueName: \"kubernetes.io/projected/b5f9b41f-3101-4516-99bd-1612910e0e3c-kube-api-access-lmv62\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.484552 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.484564 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.493591 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5f9b41f-3101-4516-99bd-1612910e0e3c" (UID: "b5f9b41f-3101-4516-99bd-1612910e0e3c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.494117 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5f9b41f-3101-4516-99bd-1612910e0e3c" (UID: "b5f9b41f-3101-4516-99bd-1612910e0e3c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.498204 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5f9b41f-3101-4516-99bd-1612910e0e3c" (UID: "b5f9b41f-3101-4516-99bd-1612910e0e3c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.546459 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-h97bp" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.585931 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-config\") pod \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.586091 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-combined-ca-bundle\") pod \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.586251 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l55bk\" (UniqueName: \"kubernetes.io/projected/77fb37d1-dfa6-4ade-9bad-6263a7f22277-kube-api-access-l55bk\") pod \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\" (UID: \"77fb37d1-dfa6-4ade-9bad-6263a7f22277\") " Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.586815 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.586838 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.586852 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5f9b41f-3101-4516-99bd-1612910e0e3c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.590035 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fb37d1-dfa6-4ade-9bad-6263a7f22277-kube-api-access-l55bk" (OuterVolumeSpecName: "kube-api-access-l55bk") pod "77fb37d1-dfa6-4ade-9bad-6263a7f22277" (UID: "77fb37d1-dfa6-4ade-9bad-6263a7f22277"). InnerVolumeSpecName "kube-api-access-l55bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.612076 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77fb37d1-dfa6-4ade-9bad-6263a7f22277" (UID: "77fb37d1-dfa6-4ade-9bad-6263a7f22277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.620008 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-config" (OuterVolumeSpecName: "config") pod "77fb37d1-dfa6-4ade-9bad-6263a7f22277" (UID: "77fb37d1-dfa6-4ade-9bad-6263a7f22277"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.690530 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l55bk\" (UniqueName: \"kubernetes.io/projected/77fb37d1-dfa6-4ade-9bad-6263a7f22277-kube-api-access-l55bk\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.690812 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.690824 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77fb37d1-dfa6-4ade-9bad-6263a7f22277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.754911 4832 scope.go:117] "RemoveContainer" containerID="c61c30ecff5b30226c9b1ae78c48dc669d21fd96ca080b25d55d1f781470a3c2" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.837331 4832 scope.go:117] "RemoveContainer" containerID="6ba5b0835f7f50a4495a70f5bb9c1bf87745d2a097317fe4cb779d759db8096d" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.860292 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7975695b86-g5x7n"] Oct 02 18:42:27 crc kubenswrapper[4832]: W1002 18:42:27.922328 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b786a5b_55e1_4e5b_aa0d_fe00ae8f524f.slice/crio-b1e7981d6673d95e300e80ee2e07834ed01c034fa1cf7c93ea18ef46a6b1e379 WatchSource:0}: Error finding container b1e7981d6673d95e300e80ee2e07834ed01c034fa1cf7c93ea18ef46a6b1e379: Status 404 returned error can't find the container with id b1e7981d6673d95e300e80ee2e07834ed01c034fa1cf7c93ea18ef46a6b1e379 Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.970877 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" event={"ID":"b5f9b41f-3101-4516-99bd-1612910e0e3c","Type":"ContainerDied","Data":"1a2f92613de25f0c256b4b48bdf0bb14df8b269c6455d223f56aac2eefd29fc5"} Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.971167 4832 scope.go:117] "RemoveContainer" containerID="ee794b95261858731463b6ca50aca7ff5e72fea32238539403421f9cff975e46" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.971115 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.984912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-h97bp" event={"ID":"77fb37d1-dfa6-4ade-9bad-6263a7f22277","Type":"ContainerDied","Data":"a61d4f9bc2362c20c2262ca59477c10676314611818d7a803469de8a4ab26434"} Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.985074 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a61d4f9bc2362c20c2262ca59477c10676314611818d7a803469de8a4ab26434" Oct 02 18:42:27 crc kubenswrapper[4832]: I1002 18:42:27.984931 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-h97bp" Oct 02 18:42:27 crc kubenswrapper[4832]: E1002 18:42:27.993114 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5gtbq" podUID="f03462b3-a4a5-441c-93c5-1f0008d95f21" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.039105 4832 scope.go:117] "RemoveContainer" containerID="a1e5c61e4fefa0831d3c97b48270c614c9603de91eb3b95bcc1ef636438253a0" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.138583 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xft64"] Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.151841 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-xft64"] Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.550821 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-74rdq"] Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.653854 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.800123 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-74rdq"] Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.839254 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pk84v"] Oct 02 18:42:28 crc kubenswrapper[4832]: E1002 18:42:28.839685 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="dnsmasq-dns" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.839702 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="dnsmasq-dns" Oct 02 18:42:28 crc kubenswrapper[4832]: E1002 18:42:28.839721 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77fb37d1-dfa6-4ade-9bad-6263a7f22277" containerName="neutron-db-sync" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.839728 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="77fb37d1-dfa6-4ade-9bad-6263a7f22277" containerName="neutron-db-sync" Oct 02 18:42:28 crc kubenswrapper[4832]: E1002 18:42:28.839771 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="init" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.839776 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="init" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.839966 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="dnsmasq-dns" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.839980 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="77fb37d1-dfa6-4ade-9bad-6263a7f22277" containerName="neutron-db-sync" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.844785 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.870524 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pk84v"] Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.933467 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.933516 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.933562 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.933594 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-config\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.933673 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l227d\" (UniqueName: \"kubernetes.io/projected/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-kube-api-access-l227d\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.933696 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:28 crc kubenswrapper[4832]: I1002 18:42:28.998361 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64945b8848-4m4pr"] Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.020607 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.037580 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-httpd-config\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.037725 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.037750 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.037798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28sb5\" (UniqueName: \"kubernetes.io/projected/8d4d6baa-ddff-4604-b621-0b875056aa02-kube-api-access-28sb5\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.037840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.037895 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-config\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.038052 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l227d\" (UniqueName: \"kubernetes.io/projected/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-kube-api-access-l227d\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.038093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.038114 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-config\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " 
pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.038152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-ovndb-tls-certs\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.038182 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-combined-ca-bundle\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.039113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.039670 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.042217 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.042841 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-config\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.043783 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-svc\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.060320 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64945b8848-4m4pr"] Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.066001 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.066450 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.066777 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7dgdj" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.067368 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" 
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.075229 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l227d\" (UniqueName: \"kubernetes.io/projected/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-kube-api-access-l227d\") pod \"dnsmasq-dns-85ff748b95-pk84v\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " pod="openstack/dnsmasq-dns-85ff748b95-pk84v"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.105390 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" event={"ID":"7520e8d3-0462-4184-9198-8620f4bf0684","Type":"ContainerStarted","Data":"7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.105451 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" event={"ID":"7520e8d3-0462-4184-9198-8620f4bf0684","Type":"ContainerStarted","Data":"be915e8d3177a9c2cbf027892f0a64672ec3a360922d915fbd40066550634cc5"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.133213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54ddcb9945-p7pkt" event={"ID":"e632994f-7397-4c6f-950a-bcdff946d4e2","Type":"ContainerStarted","Data":"5ff21eee78b1bc846b2188414bde2f3a269484bb410b17706fdd98a502435b6c"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.133711 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-54ddcb9945-p7pkt"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.140778 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-config\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.140828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-ovndb-tls-certs\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.140858 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-combined-ca-bundle\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.140895 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-httpd-config\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.140975 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28sb5\" (UniqueName: \"kubernetes.io/projected/8d4d6baa-ddff-4604-b621-0b875056aa02-kube-api-access-28sb5\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.143776 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7975695b86-g5x7n" event={"ID":"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f","Type":"ContainerStarted","Data":"afd183ddbce5ce65edfc0d03cf57b8c26d3e24a23497f6c851d1bd363b202511"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.144108 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7975695b86-g5x7n" event={"ID":"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f","Type":"ContainerStarted","Data":"01e6cc81fd6018498a5c1c99ce8e37d7f656db3b623d323e320db1bec641d728"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.144136 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7975695b86-g5x7n" event={"ID":"9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f","Type":"ContainerStarted","Data":"b1e7981d6673d95e300e80ee2e07834ed01c034fa1cf7c93ea18ef46a6b1e379"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.144906 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7975695b86-g5x7n"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.145029 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7975695b86-g5x7n"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.146232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-config\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.151108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-ovndb-tls-certs\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.153966 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-httpd-config\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.165491 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-combined-ca-bundle\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.181761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5","Type":"ContainerStarted","Data":"5b303e33f461930cec9b27ece469261568837d96c9c89c1fe1e6cd6a5aa3b788"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.183430 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pk84v"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.188745 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28sb5\" (UniqueName: \"kubernetes.io/projected/8d4d6baa-ddff-4604-b621-0b875056aa02-kube-api-access-28sb5\") pod \"neutron-64945b8848-4m4pr\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.192219 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de5b2270-9247-4b59-873f-00cdf454635c","Type":"ContainerStarted","Data":"8ad6520891ce4b941e519b3a29d198b5dbc59dc9c7274945e50bd798d6bf0e76"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.208204 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dx58r" event={"ID":"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4","Type":"ContainerStarted","Data":"698459ce9def4a3003acfbd3a6740f3df798590fe26c13dbfbc0b0d862ce0d61"}
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.268602 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54ddcb9945-p7pkt" podStartSLOduration=20.26858119 podStartE2EDuration="20.26858119s" podCreationTimestamp="2025-10-02 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:29.177867255 +0000 UTC m=+1306.147310127" watchObservedRunningTime="2025-10-02 18:42:29.26858119 +0000 UTC m=+1306.238024062"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.317006 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" path="/var/lib/kubelet/pods/b5f9b41f-3101-4516-99bd-1612910e0e3c/volumes"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.317644 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-575ff4d8db-jrg4j"]
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.364300 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-855fbd5c98-k2t4b"]
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.385256 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7cc9b5f5c6-zlcjl"]
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.410697 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7975695b86-g5x7n" podStartSLOduration=19.410678137 podStartE2EDuration="19.410678137s" podCreationTimestamp="2025-10-02 18:42:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:29.207311833 +0000 UTC m=+1306.176754705" watchObservedRunningTime="2025-10-02 18:42:29.410678137 +0000 UTC m=+1306.380121009"
Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.420322 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64945b8848-4m4pr"
Need to start a new one" pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.536313 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5fd9c9bb87-rf75p"] Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.571179 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-dx58r" podStartSLOduration=4.697886224 podStartE2EDuration="49.571156101s" podCreationTimestamp="2025-10-02 18:41:40 +0000 UTC" firstStartedPulling="2025-10-02 18:41:42.944366948 +0000 UTC m=+1259.913809820" lastFinishedPulling="2025-10-02 18:42:27.817636825 +0000 UTC m=+1304.787079697" observedRunningTime="2025-10-02 18:42:29.242771325 +0000 UTC m=+1306.212214197" watchObservedRunningTime="2025-10-02 18:42:29.571156101 +0000 UTC m=+1306.540598973" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.620429 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.931917 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.968067 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sm8r\" (UniqueName: \"kubernetes.io/projected/7520e8d3-0462-4184-9198-8620f4bf0684-kube-api-access-7sm8r\") pod \"7520e8d3-0462-4184-9198-8620f4bf0684\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.968187 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-svc\") pod \"7520e8d3-0462-4184-9198-8620f4bf0684\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.968235 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-sb\") pod \"7520e8d3-0462-4184-9198-8620f4bf0684\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.968291 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-swift-storage-0\") pod \"7520e8d3-0462-4184-9198-8620f4bf0684\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.968376 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-config\") pod \"7520e8d3-0462-4184-9198-8620f4bf0684\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.968425 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-nb\") pod \"7520e8d3-0462-4184-9198-8620f4bf0684\" (UID: \"7520e8d3-0462-4184-9198-8620f4bf0684\") " Oct 02 18:42:29 crc kubenswrapper[4832]: I1002 18:42:29.982059 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7520e8d3-0462-4184-9198-8620f4bf0684-kube-api-access-7sm8r" 
(OuterVolumeSpecName: "kube-api-access-7sm8r") pod "7520e8d3-0462-4184-9198-8620f4bf0684" (UID: "7520e8d3-0462-4184-9198-8620f4bf0684"). InnerVolumeSpecName "kube-api-access-7sm8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.036709 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7520e8d3-0462-4184-9198-8620f4bf0684" (UID: "7520e8d3-0462-4184-9198-8620f4bf0684"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.036710 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7520e8d3-0462-4184-9198-8620f4bf0684" (UID: "7520e8d3-0462-4184-9198-8620f4bf0684"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.036829 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7520e8d3-0462-4184-9198-8620f4bf0684" (UID: "7520e8d3-0462-4184-9198-8620f4bf0684"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.058769 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7520e8d3-0462-4184-9198-8620f4bf0684" (UID: "7520e8d3-0462-4184-9198-8620f4bf0684"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.058873 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-config" (OuterVolumeSpecName: "config") pod "7520e8d3-0462-4184-9198-8620f4bf0684" (UID: "7520e8d3-0462-4184-9198-8620f4bf0684"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.069812 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sm8r\" (UniqueName: \"kubernetes.io/projected/7520e8d3-0462-4184-9198-8620f4bf0684-kube-api-access-7sm8r\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.069846 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.069856 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.069865 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.069876 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.069885 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7520e8d3-0462-4184-9198-8620f4bf0684-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.112313 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pk84v"] Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.230167 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de5b2270-9247-4b59-873f-00cdf454635c","Type":"ContainerStarted","Data":"674cd898d8ed1fb6b791a4a0c3f86da99d16b7746346d4f502e66d6207740fc5"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.232155 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575ff4d8db-jrg4j" event={"ID":"a65ae528-fb46-44a4-a3a3-543acfb646a9","Type":"ContainerStarted","Data":"73d4c98d7ec101e412a0fcc3fb254b19ee1895baf9497342ff977f0504ce5ed3"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.232182 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575ff4d8db-jrg4j" event={"ID":"a65ae528-fb46-44a4-a3a3-543acfb646a9","Type":"ContainerStarted","Data":"eb61c2c73dd13ece14d2e97fa20bdf498e50cfdd080824a8d1dd815e5d34816a"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.239335 4832 generic.go:334] "Generic (PLEG): container finished" podID="7520e8d3-0462-4184-9198-8620f4bf0684" containerID="7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794" exitCode=0 Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.239394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" event={"ID":"7520e8d3-0462-4184-9198-8620f4bf0684","Type":"ContainerDied","Data":"7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.239416 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" 
event={"ID":"7520e8d3-0462-4184-9198-8620f4bf0684","Type":"ContainerDied","Data":"be915e8d3177a9c2cbf027892f0a64672ec3a360922d915fbd40066550634cc5"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.239446 4832 scope.go:117] "RemoveContainer" containerID="7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.239575 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-74rdq" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.243069 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" event={"ID":"86aa56ca-c6e9-4382-a9aa-fea6afc94ade","Type":"ContainerStarted","Data":"21203fcf759b2e62d1e4f1e91418983405c4e8a3dc3118ded19e2a6d0cc2c7e6"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.246657 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9c9bb87-rf75p" event={"ID":"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c","Type":"ContainerStarted","Data":"c4d963145300213eab2a921100b34a7811bd313c2d9855e7232ab97c59961e16"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.251336 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb31897d-9d37-446e-9cde-08d0e12fc428","Type":"ContainerStarted","Data":"ea7c56892141aab76ca9ced69b3b9f173f9a3ead307a7f80a203fd868d41f0b1"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.262820 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" event={"ID":"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8","Type":"ContainerStarted","Data":"5c0554b1333725dcd8e21b5886491998ee0189fac9bbd11406c4e5cc5fec2ff3"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.262869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" event={"ID":"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8","Type":"ContainerStarted","Data":"91512546da4e389809a49c2f1f61e91fbf4b57f7d873f768bea98cca62656b61"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.266579 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" event={"ID":"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9","Type":"ContainerStarted","Data":"675dc070b2b64a264811d6623d22fad664cf229eff4cbc429f58c6348e71c664"} Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.430451 4832 scope.go:117] "RemoveContainer" containerID="7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794" Oct 02 18:42:30 crc kubenswrapper[4832]: E1002 18:42:30.431305 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794\": container with ID starting with 7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794 not found: ID does not exist" containerID="7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794" Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.431338 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794"} err="failed to get container status \"7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794\": rpc error: code = NotFound desc = could not find container \"7771b67c28502ed54b4e1aa5e41da490ee58d853327df1777581f8e99452b794\": container with 
Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.473916 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64945b8848-4m4pr"]
Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.672410 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-74rdq"]
Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.705854 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-74rdq"]
Oct 02 18:42:30 crc kubenswrapper[4832]: I1002 18:42:30.854530 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58dd9ff6bc-xft64" podUID="b5f9b41f-3101-4516-99bd-1612910e0e3c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: i/o timeout"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.284232 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7520e8d3-0462-4184-9198-8620f4bf0684" path="/var/lib/kubelet/pods/7520e8d3-0462-4184-9198-8620f4bf0684/volumes"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.361551 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb31897d-9d37-446e-9cde-08d0e12fc428","Type":"ContainerStarted","Data":"17957db41c61486cfbf8f9a2a92ca6780f754ee3a70507cee0ff7529315f76a7"}
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.366840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" event={"ID":"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8","Type":"ContainerStarted","Data":"b2ffb8e3cf2d04903ad74d7ee138f5e6226094a86545b21b38263e742c409de7"}
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.366897 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.366959 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.370730 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64945b8848-4m4pr" event={"ID":"8d4d6baa-ddff-4604-b621-0b875056aa02","Type":"ContainerStarted","Data":"c82756cc52fc5e198d4fa1e83a718f56d875ce1e120aa93bd487562fe61d6898"}
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.370791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64945b8848-4m4pr" event={"ID":"8d4d6baa-ddff-4604-b621-0b875056aa02","Type":"ContainerStarted","Data":"8b624969f20d57a2f2c3d5fb287da274c9e468310b1ad737d597085457781d2e"}
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.372905 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575ff4d8db-jrg4j" event={"ID":"a65ae528-fb46-44a4-a3a3-543acfb646a9","Type":"ContainerStarted","Data":"e601f880e8b77a80c7432fb862769ac01e5c28e686180194d40baced6d2a2207"}
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.373867 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-575ff4d8db-jrg4j"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.373902 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-575ff4d8db-jrg4j"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.391022 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podStartSLOduration=13.39100557 podStartE2EDuration="13.39100557s" podCreationTimestamp="2025-10-02 18:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:31.389622417 +0000 UTC m=+1308.359065289" watchObservedRunningTime="2025-10-02 18:42:31.39100557 +0000 UTC m=+1308.360448442"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.397107 4832 generic.go:334] "Generic (PLEG): container finished" podID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerID="af9d17e96a559e814fd35a89f639b161c7775a1542d35b3efa562dc15bc72b7f" exitCode=0
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.397139 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" event={"ID":"86aa56ca-c6e9-4382-a9aa-fea6afc94ade","Type":"ContainerDied","Data":"af9d17e96a559e814fd35a89f639b161c7775a1542d35b3efa562dc15bc72b7f"}
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.444397 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-575ff4d8db-jrg4j" podStartSLOduration=11.444380294 podStartE2EDuration="11.444380294s" podCreationTimestamp="2025-10-02 18:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:31.411351307 +0000 UTC m=+1308.380794189" watchObservedRunningTime="2025-10-02 18:42:31.444380294 +0000 UTC m=+1308.413823166"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.616048 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68769b5c9-9g8wt"]
Oct 02 18:42:31 crc kubenswrapper[4832]: E1002 18:42:31.616609 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7520e8d3-0462-4184-9198-8620f4bf0684" containerName="init"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.616865 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7520e8d3-0462-4184-9198-8620f4bf0684" containerName="init"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.617493 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7520e8d3-0462-4184-9198-8620f4bf0684" containerName="init"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.622736 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.625356 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.625501 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.630407 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68769b5c9-9g8wt"]
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.749670 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-public-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.750018 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-internal-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.752322 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-combined-ca-bundle\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.752384 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmgwh\" (UniqueName: \"kubernetes.io/projected/eba53986-08b2-4e79-b3d9-85367ff7d816-kube-api-access-nmgwh\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.752480 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-ovndb-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.752670 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-config\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.753736 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-httpd-config\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.856815 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-public-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.857698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-internal-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.857751 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-combined-ca-bundle\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.857790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmgwh\" (UniqueName: \"kubernetes.io/projected/eba53986-08b2-4e79-b3d9-85367ff7d816-kube-api-access-nmgwh\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.857861 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-ovndb-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.857933 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-config\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.858045 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-httpd-config\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.862232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-httpd-config\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.863163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-public-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.866458 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-ovndb-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.867961 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-config\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.868600 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-combined-ca-bundle\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.871999 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eba53986-08b2-4e79-b3d9-85367ff7d816-internal-tls-certs\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:31 crc kubenswrapper[4832]: I1002 18:42:31.883653 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmgwh\" (UniqueName: \"kubernetes.io/projected/eba53986-08b2-4e79-b3d9-85367ff7d816-kube-api-access-nmgwh\") pod \"neutron-68769b5c9-9g8wt\" (UID: \"eba53986-08b2-4e79-b3d9-85367ff7d816\") " pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.052515 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68769b5c9-9g8wt"
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.413634 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de5b2270-9247-4b59-873f-00cdf454635c","Type":"ContainerStarted","Data":"be4cee0b8aabdef3e44030aa89f518ae7be1f57a8e2bbbc94da21de6214f7bab"}
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.416308 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" event={"ID":"86aa56ca-c6e9-4382-a9aa-fea6afc94ade","Type":"ContainerStarted","Data":"7ac0278d3fe639ffd8498de78a1f4e384b19b9b4ce20cd83c97a09e7e0284c3a"}
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.416494 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-pk84v"
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.419727 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb31897d-9d37-446e-9cde-08d0e12fc428","Type":"ContainerStarted","Data":"9c153939e750e4ee27dfcc15613342fa65a3aba2cd37f230b0096894e5a430bd"}
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.422230 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64945b8848-4m4pr" event={"ID":"8d4d6baa-ddff-4604-b621-0b875056aa02","Type":"ContainerStarted","Data":"5a2fd4dedbe9c8ee177d5c1311806a637850a091353ca3f053f64e14211ade05"}
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.442015 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.441995015 podStartE2EDuration="16.441995015s" podCreationTimestamp="2025-10-02 18:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:32.435077431 +0000 UTC m=+1309.404520303" watchObservedRunningTime="2025-10-02 18:42:32.441995015 +0000 UTC m=+1309.411437887"
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.474997 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64945b8848-4m4pr" podStartSLOduration=4.474974621 podStartE2EDuration="4.474974621s" podCreationTimestamp="2025-10-02 18:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:32.464794317 +0000 UTC m=+1309.434237189" watchObservedRunningTime="2025-10-02 18:42:32.474974621 +0000 UTC m=+1309.444417493"
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.493803 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" podStartSLOduration=4.493783 podStartE2EDuration="4.493783s" podCreationTimestamp="2025-10-02 18:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:32.484840325 +0000 UTC m=+1309.454283197" watchObservedRunningTime="2025-10-02 18:42:32.493783 +0000 UTC m=+1309.463225872"
Oct 02 18:42:32 crc kubenswrapper[4832]: I1002 18:42:32.514852 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=15.514832668 podStartE2EDuration="15.514832668s" podCreationTimestamp="2025-10-02 18:42:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:32.50287671 +0000 UTC m=+1309.472319592" watchObservedRunningTime="2025-10-02 18:42:32.514832668 +0000 UTC m=+1309.484275540"
Oct 02 18:42:33 crc kubenswrapper[4832]: I1002 18:42:33.445717 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64945b8848-4m4pr"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.263983 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.264438 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.307249 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.320723 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.457525 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.457576 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.499686 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.499932 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.507561 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.508421 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.516214 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.858933 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-575ff4d8db-jrg4j"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.872088 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-575ff4d8db-jrg4j"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.938045 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7cc9b5f5c6-zlcjl"]
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.938592 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api-log" containerID="cri-o://5c0554b1333725dcd8e21b5886491998ee0189fac9bbd11406c4e5cc5fec2ff3" gracePeriod=30
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.939336 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api" containerID="cri-o://b2ffb8e3cf2d04903ad74d7ee138f5e6226094a86545b21b38263e742c409de7" gracePeriod=30
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.951065 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": EOF"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.951065 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": EOF"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.951077 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": EOF"
Oct 02 18:42:37 crc kubenswrapper[4832]: I1002 18:42:37.951235 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": EOF"
Oct 02 18:42:38 crc kubenswrapper[4832]: I1002 18:42:38.520154 4832 generic.go:334] "Generic (PLEG): container finished" podID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerID="5c0554b1333725dcd8e21b5886491998ee0189fac9bbd11406c4e5cc5fec2ff3" exitCode=143
Oct 02 18:42:38 crc kubenswrapper[4832]: I1002 18:42:38.520264 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" event={"ID":"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8","Type":"ContainerDied","Data":"5c0554b1333725dcd8e21b5886491998ee0189fac9bbd11406c4e5cc5fec2ff3"}
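[Editor's note] The barbican-api probes above are plain HTTP GETs against /healthcheck; once the containers receive SIGTERM, the endpoint starts returning EOF and then connection refused, which the prober records as failures for the remainder of the grace period. A rough by-hand equivalent of one such check (the 10.217.0.195:9311 address comes from the log and is only reachable from inside the cluster; the timeout value is an assumption):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Mirrors an HTTP readiness probe: a GET with a bounded timeout where
	// any transport error (EOF, connection refused) counts as a failure.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.217.0.195:9311/healthcheck")
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status)
}
```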
event={"ID":"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8","Type":"ContainerDied","Data":"5c0554b1333725dcd8e21b5886491998ee0189fac9bbd11406c4e5cc5fec2ff3"} Oct 02 18:42:38 crc kubenswrapper[4832]: I1002 18:42:38.520578 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:39 crc kubenswrapper[4832]: I1002 18:42:39.186461 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:42:39 crc kubenswrapper[4832]: I1002 18:42:39.258872 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-b94z6"] Oct 02 18:42:39 crc kubenswrapper[4832]: I1002 18:42:39.259101 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" podUID="9c4ee378-c41a-4461-91ce-8de208177861" containerName="dnsmasq-dns" containerID="cri-o://91fb91a489427342718fc0e12aa9a981fafe0ace647fa95011bf4ba4f07c63d9" gracePeriod=10 Oct 02 18:42:39 crc kubenswrapper[4832]: I1002 18:42:39.537053 4832 generic.go:334] "Generic (PLEG): container finished" podID="9c4ee378-c41a-4461-91ce-8de208177861" containerID="91fb91a489427342718fc0e12aa9a981fafe0ace647fa95011bf4ba4f07c63d9" exitCode=0 Oct 02 18:42:39 crc kubenswrapper[4832]: I1002 18:42:39.537153 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:42:39 crc kubenswrapper[4832]: I1002 18:42:39.537573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" event={"ID":"9c4ee378-c41a-4461-91ce-8de208177861","Type":"ContainerDied","Data":"91fb91a489427342718fc0e12aa9a981fafe0ace647fa95011bf4ba4f07c63d9"} Oct 02 18:42:41 crc kubenswrapper[4832]: I1002 18:42:41.130524 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" podUID="9c4ee378-c41a-4461-91ce-8de208177861" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.185:5353: connect: connection refused" Oct 02 18:42:41 crc kubenswrapper[4832]: I1002 18:42:41.565555 4832 generic.go:334] "Generic (PLEG): container finished" podID="c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" containerID="698459ce9def4a3003acfbd3a6740f3df798590fe26c13dbfbc0b0d862ce0d61" exitCode=0 Oct 02 18:42:41 crc kubenswrapper[4832]: I1002 18:42:41.565637 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dx58r" event={"ID":"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4","Type":"ContainerDied","Data":"698459ce9def4a3003acfbd3a6740f3df798590fe26c13dbfbc0b0d862ce0d61"} Oct 02 18:42:42 crc kubenswrapper[4832]: I1002 18:42:42.997003 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 02 18:42:42 crc kubenswrapper[4832]: I1002 18:42:42.997308 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.017880 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.020275 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.020351 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.026820 4832 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.372748 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-54ddcb9945-p7pkt" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.419890 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": read tcp 10.217.0.2:36266->10.217.0.195:9311: read: connection reset by peer" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.420108 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": read tcp 10.217.0.2:36276->10.217.0.195:9311: read: connection reset by peer" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.603799 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": dial tcp 10.217.0.195:9311: connect: connection refused" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.603817 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": dial tcp 10.217.0.195:9311: connect: connection refused" Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.617904 4832 generic.go:334] "Generic (PLEG): container finished" podID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerID="b2ffb8e3cf2d04903ad74d7ee138f5e6226094a86545b21b38263e742c409de7" exitCode=0 Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.620444 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" event={"ID":"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8","Type":"ContainerDied","Data":"b2ffb8e3cf2d04903ad74d7ee138f5e6226094a86545b21b38263e742c409de7"} Oct 02 18:42:43 crc kubenswrapper[4832]: I1002 18:42:43.973103 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.012741 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7975695b86-g5x7n" Oct 02 18:42:44 crc kubenswrapper[4832]: E1002 18:42:44.087461 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Oct 02 18:42:44 crc kubenswrapper[4832]: E1002 18:42:44.087819 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x64sv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c9f3139b-0c15-4734-8b9c-d753cf1f2cb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 02 18:42:44 crc kubenswrapper[4832]: E1002 18:42:44.089155 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.252169 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dx58r" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.365733 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9drd\" (UniqueName: \"kubernetes.io/projected/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-kube-api-access-b9drd\") pod \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.365786 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-config-data\") pod \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.365835 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-combined-ca-bundle\") pod \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\" (UID: \"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4\") " Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.379424 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-kube-api-access-b9drd" (OuterVolumeSpecName: "kube-api-access-b9drd") pod "c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" (UID: "c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4"). InnerVolumeSpecName "kube-api-access-b9drd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.435127 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" (UID: "c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.480713 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9drd\" (UniqueName: \"kubernetes.io/projected/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-kube-api-access-b9drd\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.480735 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.539408 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-config-data" (OuterVolumeSpecName: "config-data") pod "c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" (UID: "c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.583678 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.662502 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-dx58r" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.662588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-dx58r" event={"ID":"c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4","Type":"ContainerDied","Data":"7363c923cc38a5f4279c0ae84900b4c94ee05528ce20d31760878e4e43b8cca4"} Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.662766 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7363c923cc38a5f4279c0ae84900b4c94ee05528ce20d31760878e4e43b8cca4" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.663359 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerName="ceilometer-notification-agent" containerID="cri-o://de6d71fd90e233d95c8cb7f239d8e2f73d0800a7aac63f8faad9a3e965ce52c3" gracePeriod=30 Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.664473 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerName="sg-core" containerID="cri-o://5b303e33f461930cec9b27ece469261568837d96c9c89c1fe1e6cd6a5aa3b788" gracePeriod=30 Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.946135 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:44 crc kubenswrapper[4832]: I1002 18:42:44.950863 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.113970 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-sb\") pod \"9c4ee378-c41a-4461-91ce-8de208177861\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jstrh\" (UniqueName: \"kubernetes.io/projected/9c4ee378-c41a-4461-91ce-8de208177861-kube-api-access-jstrh\") pod \"9c4ee378-c41a-4461-91ce-8de208177861\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114306 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-swift-storage-0\") pod \"9c4ee378-c41a-4461-91ce-8de208177861\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114369 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data-custom\") pod \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114435 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-combined-ca-bundle\") pod \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114503 4832 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-config\") pod \"9c4ee378-c41a-4461-91ce-8de208177861\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114529 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-svc\") pod \"9c4ee378-c41a-4461-91ce-8de208177861\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114582 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-logs\") pod \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114615 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-nb\") pod \"9c4ee378-c41a-4461-91ce-8de208177861\" (UID: \"9c4ee378-c41a-4461-91ce-8de208177861\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114662 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-458cq\" (UniqueName: \"kubernetes.io/projected/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-kube-api-access-458cq\") pod \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.114692 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data\") pod \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\" (UID: \"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8\") " Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.125664 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-logs" (OuterVolumeSpecName: "logs") pod "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" (UID: "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.126656 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4ee378-c41a-4461-91ce-8de208177861-kube-api-access-jstrh" (OuterVolumeSpecName: "kube-api-access-jstrh") pod "9c4ee378-c41a-4461-91ce-8de208177861" (UID: "9c4ee378-c41a-4461-91ce-8de208177861"). InnerVolumeSpecName "kube-api-access-jstrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.180440 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-kube-api-access-458cq" (OuterVolumeSpecName: "kube-api-access-458cq") pod "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" (UID: "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8"). InnerVolumeSpecName "kube-api-access-458cq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.218915 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-458cq\" (UniqueName: \"kubernetes.io/projected/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-kube-api-access-458cq\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.218947 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jstrh\" (UniqueName: \"kubernetes.io/projected/9c4ee378-c41a-4461-91ce-8de208177861-kube-api-access-jstrh\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.218957 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.219335 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" (UID: "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.320374 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.350670 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-config" (OuterVolumeSpecName: "config") pod "9c4ee378-c41a-4461-91ce-8de208177861" (UID: "9c4ee378-c41a-4461-91ce-8de208177861"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.356441 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" (UID: "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.396786 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9c4ee378-c41a-4461-91ce-8de208177861" (UID: "9c4ee378-c41a-4461-91ce-8de208177861"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.406204 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data" (OuterVolumeSpecName: "config-data") pod "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" (UID: "9e0bd067-b948-4aee-8ff7-fb5dddafcbf8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.424875 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.424910 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.424922 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.424931 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.440799 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9c4ee378-c41a-4461-91ce-8de208177861" (UID: "9c4ee378-c41a-4461-91ce-8de208177861"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.454293 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68769b5c9-9g8wt"] Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.457447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c4ee378-c41a-4461-91ce-8de208177861" (UID: "9c4ee378-c41a-4461-91ce-8de208177861"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.465995 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9c4ee378-c41a-4461-91ce-8de208177861" (UID: "9c4ee378-c41a-4461-91ce-8de208177861"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.527373 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.527410 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.527421 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c4ee378-c41a-4461-91ce-8de208177861-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.707172 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" event={"ID":"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9","Type":"ContainerStarted","Data":"f203388d83941f3adbc3f92752a8fe6b5888b8c1b4cdd31e6e3c09351ca9d96f"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.707524 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" event={"ID":"32db7ef2-6bb9-4834-9c9d-3bb13309b0e9","Type":"ContainerStarted","Data":"1950703f4ef035b59821883ce49908541ed0d999cb4061713ef398867438af36"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.710915 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68769b5c9-9g8wt" event={"ID":"eba53986-08b2-4e79-b3d9-85367ff7d816","Type":"ContainerStarted","Data":"68a16d44bce2a838b81146da6c86fd0406afd7330f14ff21cbcf20e37d887ba5"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.710955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68769b5c9-9g8wt" event={"ID":"eba53986-08b2-4e79-b3d9-85367ff7d816","Type":"ContainerStarted","Data":"6e5ae6855ea42517abfa364e1b8a6ac5e017dafb46a2e718a32ffc36932e204e"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.719134 4832 generic.go:334] "Generic (PLEG): container finished" podID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerID="5b303e33f461930cec9b27ece469261568837d96c9c89c1fe1e6cd6a5aa3b788" exitCode=2 Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.719228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5","Type":"ContainerDied","Data":"5b303e33f461930cec9b27ece469261568837d96c9c89c1fe1e6cd6a5aa3b788"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.725484 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" event={"ID":"9c4ee378-c41a-4461-91ce-8de208177861","Type":"ContainerDied","Data":"5e547f5a24aa928db87e51555c2a37413dc9c9d135b8296dbc50ace8141954fc"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.725497 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-b94z6" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.725555 4832 scope.go:117] "RemoveContainer" containerID="91fb91a489427342718fc0e12aa9a981fafe0ace647fa95011bf4ba4f07c63d9" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.731557 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-855fbd5c98-k2t4b" podStartSLOduration=14.031549856 podStartE2EDuration="28.731539699s" podCreationTimestamp="2025-10-02 18:42:17 +0000 UTC" firstStartedPulling="2025-10-02 18:42:29.447756489 +0000 UTC m=+1306.417199361" lastFinishedPulling="2025-10-02 18:42:44.147746332 +0000 UTC m=+1321.117189204" observedRunningTime="2025-10-02 18:42:45.723182982 +0000 UTC m=+1322.692625854" watchObservedRunningTime="2025-10-02 18:42:45.731539699 +0000 UTC m=+1322.700982571" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.744931 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9c9bb87-rf75p" event={"ID":"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c","Type":"ContainerStarted","Data":"c46e89371bdbb72ae003d6cf896c1c873f362b4a0bce0fe9e54c8cf973d1a097"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.745070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5fd9c9bb87-rf75p" event={"ID":"f944fb96-3cf4-42b3-b5b8-3da8dc107d7c","Type":"ContainerStarted","Data":"4f35f8d38968ef66ab02ec9c63a7d3673713de0c2cf0ce050eddba30e4eb7911"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.769780 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" event={"ID":"9e0bd067-b948-4aee-8ff7-fb5dddafcbf8","Type":"ContainerDied","Data":"91512546da4e389809a49c2f1f61e91fbf4b57f7d873f768bea98cca62656b61"} Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.769879 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7cc9b5f5c6-zlcjl" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.791682 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5fd9c9bb87-rf75p" podStartSLOduration=17.29128947 podStartE2EDuration="28.791660701s" podCreationTimestamp="2025-10-02 18:42:17 +0000 UTC" firstStartedPulling="2025-10-02 18:42:29.415197016 +0000 UTC m=+1306.384639888" lastFinishedPulling="2025-10-02 18:42:40.915568247 +0000 UTC m=+1317.885011119" observedRunningTime="2025-10-02 18:42:45.768784556 +0000 UTC m=+1322.738227438" watchObservedRunningTime="2025-10-02 18:42:45.791660701 +0000 UTC m=+1322.761103573" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.824814 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-b94z6"] Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.827647 4832 scope.go:117] "RemoveContainer" containerID="578ab805e530863ed65b5a7dbff7286041745f433af17d0d2655c407bb4f694b" Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.860726 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-b94z6"] Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.889394 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7cc9b5f5c6-zlcjl"] Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.904787 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7cc9b5f5c6-zlcjl"] Oct 02 18:42:45 crc kubenswrapper[4832]: I1002 18:42:45.961053 4832 scope.go:117] "RemoveContainer" containerID="b2ffb8e3cf2d04903ad74d7ee138f5e6226094a86545b21b38263e742c409de7" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.022066 4832 scope.go:117] "RemoveContainer" containerID="5c0554b1333725dcd8e21b5886491998ee0189fac9bbd11406c4e5cc5fec2ff3" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.672478 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 02 18:42:46 crc kubenswrapper[4832]: E1002 18:42:46.673097 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673109 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api" Oct 02 18:42:46 crc kubenswrapper[4832]: E1002 18:42:46.673126 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4ee378-c41a-4461-91ce-8de208177861" containerName="dnsmasq-dns" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673133 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4ee378-c41a-4461-91ce-8de208177861" containerName="dnsmasq-dns" Oct 02 18:42:46 crc kubenswrapper[4832]: E1002 18:42:46.673144 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" containerName="heat-db-sync" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673150 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" containerName="heat-db-sync" Oct 02 18:42:46 crc kubenswrapper[4832]: E1002 18:42:46.673158 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api-log" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673164 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api-log" Oct 02 18:42:46 crc kubenswrapper[4832]: E1002 18:42:46.673191 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4ee378-c41a-4461-91ce-8de208177861" containerName="init" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673197 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4ee378-c41a-4461-91ce-8de208177861" containerName="init" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673404 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673425 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" containerName="heat-db-sync" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673438 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4ee378-c41a-4461-91ce-8de208177861" containerName="dnsmasq-dns" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.673454 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" containerName="barbican-api-log" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.674157 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.677812 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pn4gl" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.678052 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.678242 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.690596 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.702175 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec4cba1f-e0b4-4901-add4-513dc675408e-openstack-config\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.702369 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4cba1f-e0b4-4901-add4-513dc675408e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.702410 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec4cba1f-e0b4-4901-add4-513dc675408e-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.702451 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gvz\" (UniqueName: 
\"kubernetes.io/projected/ec4cba1f-e0b4-4901-add4-513dc675408e-kube-api-access-66gvz\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.783588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5gtbq" event={"ID":"f03462b3-a4a5-441c-93c5-1f0008d95f21","Type":"ContainerStarted","Data":"407d7211cfa1d4972189e68fb48ce36b33b39b30725ce227d0a718c9f57bd8c3"} Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.790011 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68769b5c9-9g8wt" event={"ID":"eba53986-08b2-4e79-b3d9-85367ff7d816","Type":"ContainerStarted","Data":"7bbcb2c03cf74c8531f60abc38c64bc145e77461112ef2874c87ca9909a9a199"} Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.790048 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68769b5c9-9g8wt" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.806935 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec4cba1f-e0b4-4901-add4-513dc675408e-openstack-config\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.807179 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4cba1f-e0b4-4901-add4-513dc675408e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.807245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec4cba1f-e0b4-4901-add4-513dc675408e-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.807300 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66gvz\" (UniqueName: \"kubernetes.io/projected/ec4cba1f-e0b4-4901-add4-513dc675408e-kube-api-access-66gvz\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.808469 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ec4cba1f-e0b4-4901-add4-513dc675408e-openstack-config\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.833963 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68769b5c9-9g8wt" podStartSLOduration=15.833949007 podStartE2EDuration="15.833949007s" podCreationTimestamp="2025-10-02 18:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:46.831458641 +0000 UTC m=+1323.800901513" watchObservedRunningTime="2025-10-02 18:42:46.833949007 +0000 UTC m=+1323.803391879" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.834224 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ec4cba1f-e0b4-4901-add4-513dc675408e-openstack-config-secret\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.835820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4cba1f-e0b4-4901-add4-513dc675408e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.853503 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5gtbq" podStartSLOduration=9.123570767 podStartE2EDuration="52.85347901s" podCreationTimestamp="2025-10-02 18:41:54 +0000 UTC" firstStartedPulling="2025-10-02 18:42:00.739834467 +0000 UTC m=+1277.709277339" lastFinishedPulling="2025-10-02 18:42:44.46974271 +0000 UTC m=+1321.439185582" observedRunningTime="2025-10-02 18:42:46.805851382 +0000 UTC m=+1323.775294244" watchObservedRunningTime="2025-10-02 18:42:46.85347901 +0000 UTC m=+1323.822921882" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.860894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66gvz\" (UniqueName: \"kubernetes.io/projected/ec4cba1f-e0b4-4901-add4-513dc675408e-kube-api-access-66gvz\") pod \"openstackclient\" (UID: \"ec4cba1f-e0b4-4901-add4-513dc675408e\") " pod="openstack/openstackclient" Oct 02 18:42:46 crc kubenswrapper[4832]: I1002 18:42:46.995938 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 02 18:42:47 crc kubenswrapper[4832]: I1002 18:42:47.257124 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c4ee378-c41a-4461-91ce-8de208177861" path="/var/lib/kubelet/pods/9c4ee378-c41a-4461-91ce-8de208177861/volumes" Oct 02 18:42:47 crc kubenswrapper[4832]: I1002 18:42:47.258120 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0bd067-b948-4aee-8ff7-fb5dddafcbf8" path="/var/lib/kubelet/pods/9e0bd067-b948-4aee-8ff7-fb5dddafcbf8/volumes" Oct 02 18:42:47 crc kubenswrapper[4832]: I1002 18:42:47.504826 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 02 18:42:47 crc kubenswrapper[4832]: I1002 18:42:47.799900 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ec4cba1f-e0b4-4901-add4-513dc675408e","Type":"ContainerStarted","Data":"314da3ec5d3a62b46e31cfefa86f1103cccbacc5d9d905b9e3c60375c0fc07fd"} Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.182974 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-64677dc65c-wh4zf"] Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.226207 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-64677dc65c-wh4zf"] Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.229165 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.247288 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-74z78" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.249646 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.257330 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.280567 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.280638 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data-custom\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.280818 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq76d\" (UniqueName: \"kubernetes.io/projected/2132fc2a-d11e-473a-b4ab-15c56ac5debf-kube-api-access-hq76d\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.280845 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-combined-ca-bundle\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.382964 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq76d\" (UniqueName: \"kubernetes.io/projected/2132fc2a-d11e-473a-b4ab-15c56ac5debf-kube-api-access-hq76d\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.383223 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-combined-ca-bundle\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.383475 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.383519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data-custom\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.403821 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.416185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-combined-ca-bundle\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.420972 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data-custom\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.452473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq76d\" (UniqueName: \"kubernetes.io/projected/2132fc2a-d11e-473a-b4ab-15c56ac5debf-kube-api-access-hq76d\") pod \"heat-engine-64677dc65c-wh4zf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") " pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.461333 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-z7h2h"] Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.464205 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.499329 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-z7h2h"] Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.547824 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5bbc7df46d-j8ftx"] Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.549257 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.554809 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.591181 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-564f8674dd-8flcg"] Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.597911 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9www5\" (UniqueName: \"kubernetes.io/projected/c8a1ef0f-3cab-47b0-a020-e47fa685335f-kube-api-access-9www5\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.597992 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data-custom\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.598026 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjgrq\" (UniqueName: \"kubernetes.io/projected/f304a9e4-4a4b-4772-89f1-180613213911-kube-api-access-pjgrq\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.598049 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.598186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-svc\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.598210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.598284 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-config\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.598345 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-swift-storage-0\") pod 
\"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.598370 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.598395 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-combined-ca-bundle\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.601931 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.611673 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.637419 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5bbc7df46d-j8ftx"] Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.645786 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-564f8674dd-8flcg"] Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.655774 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700746 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-swift-storage-0\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700790 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700814 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-combined-ca-bundle\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9www5\" (UniqueName: \"kubernetes.io/projected/c8a1ef0f-3cab-47b0-a020-e47fa685335f-kube-api-access-9www5\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700882 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700926 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data-custom\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700951 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjgrq\" (UniqueName: \"kubernetes.io/projected/f304a9e4-4a4b-4772-89f1-180613213911-kube-api-access-pjgrq\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700967 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.700989 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68rtw\" (UniqueName: \"kubernetes.io/projected/8b30871f-113e-4bce-a095-64873f95939b-kube-api-access-68rtw\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.701041 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data-custom\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.701072 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-combined-ca-bundle\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.701116 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-svc\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.701130 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.701172 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-config\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.701808 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-swift-storage-0\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.701864 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-config\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.702380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.728319 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-combined-ca-bundle\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.732049 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data-custom\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.735193 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.736779 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-svc\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.737233 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.742432 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9www5\" (UniqueName: \"kubernetes.io/projected/c8a1ef0f-3cab-47b0-a020-e47fa685335f-kube-api-access-9www5\") pod \"dnsmasq-dns-76ff85fb9f-z7h2h\" (UID: 
\"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.777529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjgrq\" (UniqueName: \"kubernetes.io/projected/f304a9e4-4a4b-4772-89f1-180613213911-kube-api-access-pjgrq\") pod \"heat-cfnapi-5bbc7df46d-j8ftx\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.802577 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data-custom\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.802640 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-combined-ca-bundle\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.802916 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.802976 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68rtw\" (UniqueName: \"kubernetes.io/projected/8b30871f-113e-4bce-a095-64873f95939b-kube-api-access-68rtw\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.808380 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.812117 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data-custom\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.829200 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68rtw\" (UniqueName: \"kubernetes.io/projected/8b30871f-113e-4bce-a095-64873f95939b-kube-api-access-68rtw\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.841012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-combined-ca-bundle\") pod \"heat-api-564f8674dd-8flcg\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 
Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.883088 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h"
Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.897369 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx"
Oct 02 18:42:49 crc kubenswrapper[4832]: I1002 18:42:49.966530 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-564f8674dd-8flcg"
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.681954 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-64677dc65c-wh4zf"]
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.868319 4832 generic.go:334] "Generic (PLEG): container finished" podID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerID="de6d71fd90e233d95c8cb7f239d8e2f73d0800a7aac63f8faad9a3e965ce52c3" exitCode=0
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.868581 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5","Type":"ContainerDied","Data":"de6d71fd90e233d95c8cb7f239d8e2f73d0800a7aac63f8faad9a3e965ce52c3"}
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.871332 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64677dc65c-wh4zf" event={"ID":"2132fc2a-d11e-473a-b4ab-15c56ac5debf","Type":"ContainerStarted","Data":"934a3fb9fc592f1f05a1f0787f6260284c7be675f431e558e89bef8055ddab0b"}
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.906482 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.938114 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-scripts\") pod \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") "
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.938347 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-config-data\") pod \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") "
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.938562 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-run-httpd\") pod \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") "
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.938682 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-sg-core-conf-yaml\") pod \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") "
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.938836 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-combined-ca-bundle\") pod \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") "
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.939002 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-log-httpd\") pod \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") "
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.939077 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" (UID: "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.939203 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x64sv\" (UniqueName: \"kubernetes.io/projected/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-kube-api-access-x64sv\") pod \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\" (UID: \"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5\") "
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.939480 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" (UID: "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.939817 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.939880 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.947919 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-scripts" (OuterVolumeSpecName: "scripts") pod "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" (UID: "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:42:50 crc kubenswrapper[4832]: I1002 18:42:50.986910 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-kube-api-access-x64sv" (OuterVolumeSpecName: "kube-api-access-x64sv") pod "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" (UID: "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5"). InnerVolumeSpecName "kube-api-access-x64sv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.028101 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-config-data" (OuterVolumeSpecName: "config-data") pod "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" (UID: "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.041945 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x64sv\" (UniqueName: \"kubernetes.io/projected/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-kube-api-access-x64sv\") on node \"crc\" DevicePath \"\""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.041985 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-scripts\") on node \"crc\" DevicePath \"\""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.041998 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-config-data\") on node \"crc\" DevicePath \"\""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.078813 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" (UID: "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.106115 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" (UID: "c9f3139b-0c15-4734-8b9c-d753cf1f2cb5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.144386 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.144429 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.365670 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5bbc7df46d-j8ftx"]
Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.385880 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-564f8674dd-8flcg"]
Oct 02 18:42:51 crc kubenswrapper[4832]: W1002 18:42:51.391058 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b30871f_113e_4bce_a095_64873f95939b.slice/crio-edb7490f91ad65fd86c6caa6abd8f84d2f939fd9f27d4ccc0fa12c2def6711da WatchSource:0}: Error finding container edb7490f91ad65fd86c6caa6abd8f84d2f939fd9f27d4ccc0fa12c2def6711da: Status 404 returned error can't find the container with id edb7490f91ad65fd86c6caa6abd8f84d2f939fd9f27d4ccc0fa12c2def6711da
event={"ID":"c9f3139b-0c15-4734-8b9c-d753cf1f2cb5","Type":"ContainerDied","Data":"7a0b2ab57d13c52b853c18c78b350c77e0cc861f3371ce6bb285bc82760a03c2"} Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.908025 4832 scope.go:117] "RemoveContainer" containerID="5b303e33f461930cec9b27ece469261568837d96c9c89c1fe1e6cd6a5aa3b788" Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.908173 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.916527 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" event={"ID":"c8a1ef0f-3cab-47b0-a020-e47fa685335f","Type":"ContainerStarted","Data":"a13dc534b2f72fba7fd6664b99d365597a0460dd21ccfd5f5a358d945e858177"} Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.916572 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" event={"ID":"c8a1ef0f-3cab-47b0-a020-e47fa685335f","Type":"ContainerStarted","Data":"fbcf9a2c5873cf5cd7e3fe98f8eb745ed235c9014196084c5bfa325b5d40ad92"} Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.918418 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" event={"ID":"f304a9e4-4a4b-4772-89f1-180613213911","Type":"ContainerStarted","Data":"fe6a77f41541120348dcf0d389e19e71f72a35f525f10674bd9f785df5da13af"} Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.939618 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-564f8674dd-8flcg" event={"ID":"8b30871f-113e-4bce-a095-64873f95939b","Type":"ContainerStarted","Data":"edb7490f91ad65fd86c6caa6abd8f84d2f939fd9f27d4ccc0fa12c2def6711da"} Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.967448 4832 scope.go:117] "RemoveContainer" containerID="de6d71fd90e233d95c8cb7f239d8e2f73d0800a7aac63f8faad9a3e965ce52c3" Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.984710 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64677dc65c-wh4zf" event={"ID":"2132fc2a-d11e-473a-b4ab-15c56ac5debf","Type":"ContainerStarted","Data":"2a484137a645562083e85d0f06ef2a80f24c332fe468822f79d6ad993e6e3d32"} Oct 02 18:42:51 crc kubenswrapper[4832]: I1002 18:42:51.985596 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.034056 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.045548 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.069999 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-64677dc65c-wh4zf" podStartSLOduration=3.06997839 podStartE2EDuration="3.06997839s" podCreationTimestamp="2025-10-02 18:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:52.039240043 +0000 UTC m=+1329.008682915" watchObservedRunningTime="2025-10-02 18:42:52.06997839 +0000 UTC m=+1329.039421262" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.098875 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:42:52 crc kubenswrapper[4832]: E1002 18:42:52.099365 4832 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerName="sg-core" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.099382 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerName="sg-core" Oct 02 18:42:52 crc kubenswrapper[4832]: E1002 18:42:52.099418 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerName="ceilometer-notification-agent" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.099424 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerName="ceilometer-notification-agent" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.099773 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerName="ceilometer-notification-agent" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.099790 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" containerName="sg-core" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.102089 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.105671 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.105905 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.108320 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.235155 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-log-httpd\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.235327 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-scripts\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.235348 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.235378 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk276\" (UniqueName: \"kubernetes.io/projected/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-kube-api-access-mk276\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.235397 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.235554 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-config-data\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.235666 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-run-httpd\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.337306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-scripts\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.337369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.337410 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk276\" (UniqueName: \"kubernetes.io/projected/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-kube-api-access-mk276\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.337439 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.337508 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-config-data\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.337541 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-run-httpd\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.337587 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-log-httpd\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.338798 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-run-httpd\") pod \"ceilometer-0\" (UID: 
\"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.340161 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-log-httpd\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.343256 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.343931 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-scripts\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.345881 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.349336 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-config-data\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.355042 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk276\" (UniqueName: \"kubernetes.io/projected/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-kube-api-access-mk276\") pod \"ceilometer-0\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " pod="openstack/ceilometer-0" Oct 02 18:42:52 crc kubenswrapper[4832]: I1002 18:42:52.468232 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.008782 4832 generic.go:334] "Generic (PLEG): container finished" podID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerID="a13dc534b2f72fba7fd6664b99d365597a0460dd21ccfd5f5a358d945e858177" exitCode=0 Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.010333 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" event={"ID":"c8a1ef0f-3cab-47b0-a020-e47fa685335f","Type":"ContainerDied","Data":"a13dc534b2f72fba7fd6664b99d365597a0460dd21ccfd5f5a358d945e858177"} Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.129330 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.238722 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f3139b-0c15-4734-8b9c-d753cf1f2cb5" path="/var/lib/kubelet/pods/c9f3139b-0c15-4734-8b9c-d753cf1f2cb5/volumes" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.661061 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5f6f67fd59-pbxsj"] Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.664311 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.668564 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.668724 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.668877 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.697096 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5f6f67fd59-pbxsj"] Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.797615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffb41da-c1fe-465d-8ddc-9df65cc50a51-log-httpd\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.797696 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfc8s\" (UniqueName: \"kubernetes.io/projected/cffb41da-c1fe-465d-8ddc-9df65cc50a51-kube-api-access-hfc8s\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.797727 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-public-tls-certs\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.797759 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-config-data\") pod 
\"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.797776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cffb41da-c1fe-465d-8ddc-9df65cc50a51-etc-swift\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.797821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffb41da-c1fe-465d-8ddc-9df65cc50a51-run-httpd\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.797856 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-combined-ca-bundle\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.798054 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-internal-tls-certs\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.899978 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-combined-ca-bundle\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900055 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-internal-tls-certs\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900125 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffb41da-c1fe-465d-8ddc-9df65cc50a51-log-httpd\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900187 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfc8s\" (UniqueName: \"kubernetes.io/projected/cffb41da-c1fe-465d-8ddc-9df65cc50a51-kube-api-access-hfc8s\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900215 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-public-tls-certs\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900245 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-config-data\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900278 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cffb41da-c1fe-465d-8ddc-9df65cc50a51-etc-swift\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900325 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffb41da-c1fe-465d-8ddc-9df65cc50a51-run-httpd\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900757 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffb41da-c1fe-465d-8ddc-9df65cc50a51-log-httpd\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.900805 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffb41da-c1fe-465d-8ddc-9df65cc50a51-run-httpd\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.910077 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-combined-ca-bundle\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.914134 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-config-data\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.914211 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cffb41da-c1fe-465d-8ddc-9df65cc50a51-etc-swift\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.914619 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-internal-tls-certs\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: 
\"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.918915 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfc8s\" (UniqueName: \"kubernetes.io/projected/cffb41da-c1fe-465d-8ddc-9df65cc50a51-kube-api-access-hfc8s\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.930310 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffb41da-c1fe-465d-8ddc-9df65cc50a51-public-tls-certs\") pod \"swift-proxy-5f6f67fd59-pbxsj\" (UID: \"cffb41da-c1fe-465d-8ddc-9df65cc50a51\") " pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:53 crc kubenswrapper[4832]: I1002 18:42:53.989106 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:54 crc kubenswrapper[4832]: I1002 18:42:54.024415 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" event={"ID":"c8a1ef0f-3cab-47b0-a020-e47fa685335f","Type":"ContainerStarted","Data":"7a6eeef9eabb7c054397649fb29f7706219b41b7d9577fdac1b5e155cf52de99"} Oct 02 18:42:54 crc kubenswrapper[4832]: I1002 18:42:54.025441 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:54 crc kubenswrapper[4832]: I1002 18:42:54.028711 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerStarted","Data":"53622898bb4a458a26033464c7a6f9e13f33e2fb60820dd6d4e15ad90133abdf"} Oct 02 18:42:54 crc kubenswrapper[4832]: I1002 18:42:54.051940 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" podStartSLOduration=5.051922872 podStartE2EDuration="5.051922872s" podCreationTimestamp="2025-10-02 18:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:54.044672208 +0000 UTC m=+1331.014115070" watchObservedRunningTime="2025-10-02 18:42:54.051922872 +0000 UTC m=+1331.021365744" Oct 02 18:42:54 crc kubenswrapper[4832]: I1002 18:42:54.680031 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:42:55 crc kubenswrapper[4832]: I1002 18:42:55.039509 4832 generic.go:334] "Generic (PLEG): container finished" podID="f03462b3-a4a5-441c-93c5-1f0008d95f21" containerID="407d7211cfa1d4972189e68fb48ce36b33b39b30725ce227d0a718c9f57bd8c3" exitCode=0 Oct 02 18:42:55 crc kubenswrapper[4832]: I1002 18:42:55.039599 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5gtbq" event={"ID":"f03462b3-a4a5-441c-93c5-1f0008d95f21","Type":"ContainerDied","Data":"407d7211cfa1d4972189e68fb48ce36b33b39b30725ce227d0a718c9f57bd8c3"} Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.187029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" event={"ID":"f304a9e4-4a4b-4772-89f1-180613213911","Type":"ContainerStarted","Data":"ab6a24107c79e78bd2f29a02ab7c1b5efe845c3b3e50e0523316cdf076cfaf82"} Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.187515 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.201696 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.248006 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" podStartSLOduration=3.050474119 podStartE2EDuration="7.24798033s" podCreationTimestamp="2025-10-02 18:42:49 +0000 UTC" firstStartedPulling="2025-10-02 18:42:51.366547071 +0000 UTC m=+1328.335989943" lastFinishedPulling="2025-10-02 18:42:55.564053282 +0000 UTC m=+1332.533496154" observedRunningTime="2025-10-02 18:42:56.221943888 +0000 UTC m=+1333.191386760" watchObservedRunningTime="2025-10-02 18:42:56.24798033 +0000 UTC m=+1333.217423202" Oct 02 18:42:56 crc kubenswrapper[4832]: W1002 18:42:56.317534 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcffb41da_c1fe_465d_8ddc_9df65cc50a51.slice/crio-eccb4338aba7321f1b2a2437da811967c70b01987d8d8520770851dbcd22c700 WatchSource:0}: Error finding container eccb4338aba7321f1b2a2437da811967c70b01987d8d8520770851dbcd22c700: Status 404 returned error can't find the container with id eccb4338aba7321f1b2a2437da811967c70b01987d8d8520770851dbcd22c700 Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.320483 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5f6f67fd59-pbxsj"] Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.329669 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-564f8674dd-8flcg" podStartSLOduration=3.160816718 podStartE2EDuration="7.329643825s" podCreationTimestamp="2025-10-02 18:42:49 +0000 UTC" firstStartedPulling="2025-10-02 18:42:51.393590104 +0000 UTC m=+1328.363032976" lastFinishedPulling="2025-10-02 18:42:55.562417211 +0000 UTC m=+1332.531860083" observedRunningTime="2025-10-02 18:42:56.254016666 +0000 UTC m=+1333.223459548" watchObservedRunningTime="2025-10-02 18:42:56.329643825 +0000 UTC m=+1333.299086707" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.371170 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-86bf6cf48b-jmwqc"] Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.372691 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.392145 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-86bf6cf48b-jmwqc"] Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.429445 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-68c567499b-m74m6"] Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.433465 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.440674 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7db5b7c86d-5r7nc"] Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.449030 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.465173 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68c567499b-m74m6"] Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.479620 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7db5b7c86d-5r7nc"] Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.492930 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-combined-ca-bundle\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.492972 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data-custom\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.492999 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data-custom\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493032 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfjls\" (UniqueName: \"kubernetes.io/projected/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-kube-api-access-gfjls\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493051 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-combined-ca-bundle\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493069 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvr6m\" (UniqueName: \"kubernetes.io/projected/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-kube-api-access-rvr6m\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493088 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-combined-ca-bundle\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493155 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data-custom\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493174 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493195 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9gq\" (UniqueName: \"kubernetes.io/projected/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-kube-api-access-5k9gq\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.493304 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.598184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603116 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-combined-ca-bundle\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603202 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data-custom\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data-custom\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603579 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfjls\" 
(UniqueName: \"kubernetes.io/projected/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-kube-api-access-gfjls\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603634 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-combined-ca-bundle\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603670 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvr6m\" (UniqueName: \"kubernetes.io/projected/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-kube-api-access-rvr6m\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-combined-ca-bundle\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data-custom\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603918 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.603967 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k9gq\" (UniqueName: \"kubernetes.io/projected/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-kube-api-access-5k9gq\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.617385 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.622636 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.624421 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data-custom\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.626333 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-combined-ca-bundle\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.627108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-combined-ca-bundle\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.631774 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-combined-ca-bundle\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.637322 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.641008 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfjls\" (UniqueName: \"kubernetes.io/projected/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-kube-api-access-gfjls\") pod \"heat-api-68c567499b-m74m6\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.642655 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data-custom\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.643054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k9gq\" (UniqueName: \"kubernetes.io/projected/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-kube-api-access-5k9gq\") pod \"heat-cfnapi-7db5b7c86d-5r7nc\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.648820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data-custom\") pod 
\"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.659182 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvr6m\" (UniqueName: \"kubernetes.io/projected/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-kube-api-access-rvr6m\") pod \"heat-engine-86bf6cf48b-jmwqc\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.742784 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.800011 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:56 crc kubenswrapper[4832]: I1002 18:42:56.905808 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.341080 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5gtbq" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.343231 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" event={"ID":"cffb41da-c1fe-465d-8ddc-9df65cc50a51","Type":"ContainerStarted","Data":"920544ee0ae633cec960a3ea75c33d53b7aa7d52683ed32dd0fc368ba20d70aa"} Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.343316 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" event={"ID":"cffb41da-c1fe-465d-8ddc-9df65cc50a51","Type":"ContainerStarted","Data":"eccb4338aba7321f1b2a2437da811967c70b01987d8d8520770851dbcd22c700"} Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.348577 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerStarted","Data":"a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc"} Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.368352 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-564f8674dd-8flcg" event={"ID":"8b30871f-113e-4bce-a095-64873f95939b","Type":"ContainerStarted","Data":"d522588e8b3a8dfb8f2fff69499352daaf54884515e2a41b8510d9653e29139b"} Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.442166 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-combined-ca-bundle\") pod \"f03462b3-a4a5-441c-93c5-1f0008d95f21\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.442292 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-scripts\") pod \"f03462b3-a4a5-441c-93c5-1f0008d95f21\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.442430 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbrfs\" (UniqueName: \"kubernetes.io/projected/f03462b3-a4a5-441c-93c5-1f0008d95f21-kube-api-access-nbrfs\") pod \"f03462b3-a4a5-441c-93c5-1f0008d95f21\" (UID: 
\"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.442451 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-db-sync-config-data\") pod \"f03462b3-a4a5-441c-93c5-1f0008d95f21\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.442472 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f03462b3-a4a5-441c-93c5-1f0008d95f21-etc-machine-id\") pod \"f03462b3-a4a5-441c-93c5-1f0008d95f21\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.442522 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-config-data\") pod \"f03462b3-a4a5-441c-93c5-1f0008d95f21\" (UID: \"f03462b3-a4a5-441c-93c5-1f0008d95f21\") " Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.445341 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f03462b3-a4a5-441c-93c5-1f0008d95f21-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f03462b3-a4a5-441c-93c5-1f0008d95f21" (UID: "f03462b3-a4a5-441c-93c5-1f0008d95f21"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.452165 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03462b3-a4a5-441c-93c5-1f0008d95f21-kube-api-access-nbrfs" (OuterVolumeSpecName: "kube-api-access-nbrfs") pod "f03462b3-a4a5-441c-93c5-1f0008d95f21" (UID: "f03462b3-a4a5-441c-93c5-1f0008d95f21"). InnerVolumeSpecName "kube-api-access-nbrfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.473758 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-scripts" (OuterVolumeSpecName: "scripts") pod "f03462b3-a4a5-441c-93c5-1f0008d95f21" (UID: "f03462b3-a4a5-441c-93c5-1f0008d95f21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.480384 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f03462b3-a4a5-441c-93c5-1f0008d95f21" (UID: "f03462b3-a4a5-441c-93c5-1f0008d95f21"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.544887 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.544917 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbrfs\" (UniqueName: \"kubernetes.io/projected/f03462b3-a4a5-441c-93c5-1f0008d95f21-kube-api-access-nbrfs\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.544930 4832 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.544937 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f03462b3-a4a5-441c-93c5-1f0008d95f21-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.568706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f03462b3-a4a5-441c-93c5-1f0008d95f21" (UID: "f03462b3-a4a5-441c-93c5-1f0008d95f21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.582556 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-config-data" (OuterVolumeSpecName: "config-data") pod "f03462b3-a4a5-441c-93c5-1f0008d95f21" (UID: "f03462b3-a4a5-441c-93c5-1f0008d95f21"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.649017 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.649056 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03462b3-a4a5-441c-93c5-1f0008d95f21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:42:57 crc kubenswrapper[4832]: I1002 18:42:57.924847 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-86bf6cf48b-jmwqc"] Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.079182 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7db5b7c86d-5r7nc"] Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.396249 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" event={"ID":"2e05ad7f-55e9-434c-9714-ab700b1ff7c1","Type":"ContainerStarted","Data":"ae47eb1d9fe6588e099a4ce3e8e3b1a385557ea7f3c2ce2eb8c99d418e3a65a3"} Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.407881 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5gtbq" event={"ID":"f03462b3-a4a5-441c-93c5-1f0008d95f21","Type":"ContainerDied","Data":"49430731befb9c95fc765b363d76a4c30149924a601436b78125173fdbf7d8fe"} Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.407920 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49430731befb9c95fc765b363d76a4c30149924a601436b78125173fdbf7d8fe" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.408005 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5gtbq" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.445082 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" event={"ID":"cffb41da-c1fe-465d-8ddc-9df65cc50a51","Type":"ContainerStarted","Data":"c3ddc9dc0bfe6c7fc3fb058760b121d9f0eb5232bf5575d4e39c53fbf18216c5"} Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.446884 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.447032 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.524501 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-68c567499b-m74m6"] Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.527669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerStarted","Data":"f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e"} Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.533055 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" podStartSLOduration=5.533019229 podStartE2EDuration="5.533019229s" podCreationTimestamp="2025-10-02 18:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:58.513646012 +0000 UTC m=+1335.483088884" watchObservedRunningTime="2025-10-02 18:42:58.533019229 +0000 UTC m=+1335.502462101" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.536700 4832 generic.go:334] "Generic (PLEG): container finished" podID="8b30871f-113e-4bce-a095-64873f95939b" containerID="d522588e8b3a8dfb8f2fff69499352daaf54884515e2a41b8510d9653e29139b" exitCode=1 Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.536900 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-564f8674dd-8flcg" event={"ID":"8b30871f-113e-4bce-a095-64873f95939b","Type":"ContainerDied","Data":"d522588e8b3a8dfb8f2fff69499352daaf54884515e2a41b8510d9653e29139b"} Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.538440 4832 scope.go:117] "RemoveContainer" containerID="d522588e8b3a8dfb8f2fff69499352daaf54884515e2a41b8510d9653e29139b" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.540653 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bf6cf48b-jmwqc" event={"ID":"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5","Type":"ContainerStarted","Data":"9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c"} Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.545659 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.545685 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bf6cf48b-jmwqc" event={"ID":"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5","Type":"ContainerStarted","Data":"2c543c81c2051495a298a0c7e1b84240d0fb276b59f29d33d61da3fa10e5c318"} Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.645367 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-86bf6cf48b-jmwqc" podStartSLOduration=2.645346339 
podStartE2EDuration="2.645346339s" podCreationTimestamp="2025-10-02 18:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:58.589615172 +0000 UTC m=+1335.559058044" watchObservedRunningTime="2025-10-02 18:42:58.645346339 +0000 UTC m=+1335.614789211" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.707866 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:42:58 crc kubenswrapper[4832]: E1002 18:42:58.708452 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03462b3-a4a5-441c-93c5-1f0008d95f21" containerName="cinder-db-sync" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.708470 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03462b3-a4a5-441c-93c5-1f0008d95f21" containerName="cinder-db-sync" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.708695 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03462b3-a4a5-441c-93c5-1f0008d95f21" containerName="cinder-db-sync" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.709932 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.722695 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.722948 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.723212 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.723222 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bk8lf" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.778748 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.891402 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-z7h2h"] Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.891713 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" podUID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerName="dnsmasq-dns" containerID="cri-o://7a6eeef9eabb7c054397649fb29f7706219b41b7d9577fdac1b5e155cf52de99" gracePeriod=10 Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.898626 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.898719 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.898747 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-szw2n\" (UniqueName: \"kubernetes.io/projected/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-kube-api-access-szw2n\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.898777 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.898797 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.898832 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.921422 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.936981 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-q287c"] Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.941015 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.958790 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-q287c"] Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.971772 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.973984 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.980228 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 02 18:42:58 crc kubenswrapper[4832]: I1002 18:42:58.982425 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.000718 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.000769 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szw2n\" (UniqueName: \"kubernetes.io/projected/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-kube-api-access-szw2n\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.000803 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.000824 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.000861 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.000947 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.007397 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.009939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.027422 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.035764 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.038185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.040065 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szw2n\" (UniqueName: \"kubernetes.io/projected/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-kube-api-access-szw2n\") pod \"cinder-scheduler-0\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108610 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108673 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108699 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3b380b-509b-4012-bbec-74d1dd95c048-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108721 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-scripts\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108787 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95vfm\" (UniqueName: \"kubernetes.io/projected/ab3b380b-509b-4012-bbec-74d1dd95c048-kube-api-access-95vfm\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108806 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3b380b-509b-4012-bbec-74d1dd95c048-logs\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc 
kubenswrapper[4832]: I1002 18:42:59.108858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-config\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108926 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108971 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.108988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.109012 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdpcb\" (UniqueName: \"kubernetes.io/projected/ecd658b9-1c22-4778-afde-b392155b499a-kube-api-access-cdpcb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212137 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95vfm\" (UniqueName: \"kubernetes.io/projected/ab3b380b-509b-4012-bbec-74d1dd95c048-kube-api-access-95vfm\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212193 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3b380b-509b-4012-bbec-74d1dd95c048-logs\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212253 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-config\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212286 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212318 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212335 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212387 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212407 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212426 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdpcb\" (UniqueName: \"kubernetes.io/projected/ecd658b9-1c22-4778-afde-b392155b499a-kube-api-access-cdpcb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212475 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/ab3b380b-509b-4012-bbec-74d1dd95c048-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.212535 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-scripts\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.213225 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3b380b-509b-4012-bbec-74d1dd95c048-logs\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.213964 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-config\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.214480 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.215127 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.216886 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-scripts\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.218127 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.218425 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3b380b-509b-4012-bbec-74d1dd95c048-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.219608 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.225132 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data-custom\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.228215 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.228862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.234991 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.237956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95vfm\" (UniqueName: \"kubernetes.io/projected/ab3b380b-509b-4012-bbec-74d1dd95c048-kube-api-access-95vfm\") pod \"cinder-api-0\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.238942 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdpcb\" (UniqueName: \"kubernetes.io/projected/ecd658b9-1c22-4778-afde-b392155b499a-kube-api-access-cdpcb\") pod \"dnsmasq-dns-7756b9d78c-q287c\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.312407 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.339862 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.450279 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.663195 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c567499b-m74m6" event={"ID":"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54","Type":"ContainerStarted","Data":"5d0bf408441a01d04f83c9abfc5e8995059c10938833a954a86a84cd72127551"} Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.663496 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c567499b-m74m6" event={"ID":"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54","Type":"ContainerStarted","Data":"a4f6c538014b077865f6d336e5f42960ac86cc1057e9608b5564c2c8571051bb"} Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.663534 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.668227 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" event={"ID":"2e05ad7f-55e9-434c-9714-ab700b1ff7c1","Type":"ContainerStarted","Data":"d48037784852b17f9da82d4883b6e90337d71dcf65342aef33252dad335dcfc3"} Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.668384 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.695046 4832 generic.go:334] "Generic (PLEG): container finished" podID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerID="7a6eeef9eabb7c054397649fb29f7706219b41b7d9577fdac1b5e155cf52de99" exitCode=0 Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.695838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" event={"ID":"c8a1ef0f-3cab-47b0-a020-e47fa685335f","Type":"ContainerDied","Data":"7a6eeef9eabb7c054397649fb29f7706219b41b7d9577fdac1b5e155cf52de99"} Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.703322 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-68c567499b-m74m6" podStartSLOduration=3.703302288 podStartE2EDuration="3.703302288s" podCreationTimestamp="2025-10-02 18:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:59.687197002 +0000 UTC m=+1336.656639874" watchObservedRunningTime="2025-10-02 18:42:59.703302288 +0000 UTC m=+1336.672745160" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.723240 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" podStartSLOduration=3.723222152 podStartE2EDuration="3.723222152s" podCreationTimestamp="2025-10-02 18:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:42:59.714502383 +0000 UTC m=+1336.683945255" watchObservedRunningTime="2025-10-02 18:42:59.723222152 +0000 UTC m=+1336.692665024" Oct 02 18:42:59 crc kubenswrapper[4832]: I1002 18:42:59.968426 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.274774 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.364138 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-swift-storage-0\") pod \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.364229 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9www5\" (UniqueName: \"kubernetes.io/projected/c8a1ef0f-3cab-47b0-a020-e47fa685335f-kube-api-access-9www5\") pod \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.364368 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-nb\") pod \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.364385 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-svc\") pod \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.364454 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-config\") pod \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.364475 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-sb\") pod \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\" (UID: \"c8a1ef0f-3cab-47b0-a020-e47fa685335f\") " Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.378418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a1ef0f-3cab-47b0-a020-e47fa685335f-kube-api-access-9www5" (OuterVolumeSpecName: "kube-api-access-9www5") pod "c8a1ef0f-3cab-47b0-a020-e47fa685335f" (UID: "c8a1ef0f-3cab-47b0-a020-e47fa685335f"). InnerVolumeSpecName "kube-api-access-9www5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.467186 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9www5\" (UniqueName: \"kubernetes.io/projected/c8a1ef0f-3cab-47b0-a020-e47fa685335f-kube-api-access-9www5\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.486461 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c8a1ef0f-3cab-47b0-a020-e47fa685335f" (UID: "c8a1ef0f-3cab-47b0-a020-e47fa685335f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.560890 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8a1ef0f-3cab-47b0-a020-e47fa685335f" (UID: "c8a1ef0f-3cab-47b0-a020-e47fa685335f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.574779 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.574810 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.601717 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8a1ef0f-3cab-47b0-a020-e47fa685335f" (UID: "c8a1ef0f-3cab-47b0-a020-e47fa685335f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.649383 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-config" (OuterVolumeSpecName: "config") pod "c8a1ef0f-3cab-47b0-a020-e47fa685335f" (UID: "c8a1ef0f-3cab-47b0-a020-e47fa685335f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.687139 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.687174 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.695347 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.733777 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8a1ef0f-3cab-47b0-a020-e47fa685335f" (UID: "c8a1ef0f-3cab-47b0-a020-e47fa685335f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.753360 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-q287c"] Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.779138 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.786800 4832 generic.go:334] "Generic (PLEG): container finished" podID="8b30871f-113e-4bce-a095-64873f95939b" containerID="4bd2b2d83c5892725a74616f0a265cdbbf7892771a663155a367945cde17bf37" exitCode=1 Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.786892 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-564f8674dd-8flcg" event={"ID":"8b30871f-113e-4bce-a095-64873f95939b","Type":"ContainerDied","Data":"4bd2b2d83c5892725a74616f0a265cdbbf7892771a663155a367945cde17bf37"} Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.786927 4832 scope.go:117] "RemoveContainer" containerID="d522588e8b3a8dfb8f2fff69499352daaf54884515e2a41b8510d9653e29139b" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.787900 4832 scope.go:117] "RemoveContainer" containerID="4bd2b2d83c5892725a74616f0a265cdbbf7892771a663155a367945cde17bf37" Oct 02 18:43:00 crc kubenswrapper[4832]: E1002 18:43:00.788133 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-564f8674dd-8flcg_openstack(8b30871f-113e-4bce-a095-64873f95939b)\"" pod="openstack/heat-api-564f8674dd-8flcg" podUID="8b30871f-113e-4bce-a095-64873f95939b" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.789433 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8a1ef0f-3cab-47b0-a020-e47fa685335f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.823581 4832 generic.go:334] "Generic (PLEG): container finished" podID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" containerID="5d0bf408441a01d04f83c9abfc5e8995059c10938833a954a86a84cd72127551" exitCode=1 Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.823760 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c567499b-m74m6" event={"ID":"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54","Type":"ContainerDied","Data":"5d0bf408441a01d04f83c9abfc5e8995059c10938833a954a86a84cd72127551"} Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.824477 4832 scope.go:117] "RemoveContainer" containerID="5d0bf408441a01d04f83c9abfc5e8995059c10938833a954a86a84cd72127551" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.862795 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerStarted","Data":"8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6"} Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.909385 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" event={"ID":"c8a1ef0f-3cab-47b0-a020-e47fa685335f","Type":"ContainerDied","Data":"fbcf9a2c5873cf5cd7e3fe98f8eb745ed235c9014196084c5bfa325b5d40ad92"} Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.909494 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.968436 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" containerID="d48037784852b17f9da82d4883b6e90337d71dcf65342aef33252dad335dcfc3" exitCode=1 Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.968481 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" event={"ID":"2e05ad7f-55e9-434c-9714-ab700b1ff7c1","Type":"ContainerDied","Data":"d48037784852b17f9da82d4883b6e90337d71dcf65342aef33252dad335dcfc3"} Oct 02 18:43:00 crc kubenswrapper[4832]: I1002 18:43:00.969185 4832 scope.go:117] "RemoveContainer" containerID="d48037784852b17f9da82d4883b6e90337d71dcf65342aef33252dad335dcfc3" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.005357 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-z7h2h"] Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.039104 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-z7h2h"] Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.057881 4832 scope.go:117] "RemoveContainer" containerID="7a6eeef9eabb7c054397649fb29f7706219b41b7d9577fdac1b5e155cf52de99" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.192399 4832 scope.go:117] "RemoveContainer" containerID="a13dc534b2f72fba7fd6664b99d365597a0460dd21ccfd5f5a358d945e858177" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.242881 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" path="/var/lib/kubelet/pods/c8a1ef0f-3cab-47b0-a020-e47fa685335f/volumes" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.297902 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-564f8674dd-8flcg"] Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.333916 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-55b645bf4f-f4w5l"] Oct 02 18:43:01 crc kubenswrapper[4832]: E1002 18:43:01.334457 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerName="dnsmasq-dns" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.334474 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerName="dnsmasq-dns" Oct 02 18:43:01 crc kubenswrapper[4832]: E1002 18:43:01.334521 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerName="init" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.334528 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerName="init" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.334735 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerName="dnsmasq-dns" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.335536 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.338955 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.339130 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.361506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-55b645bf4f-f4w5l"] Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.375986 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5bbc7df46d-j8ftx"] Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.376329 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" podUID="f304a9e4-4a4b-4772-89f1-180613213911" containerName="heat-cfnapi" containerID="cri-o://ab6a24107c79e78bd2f29a02ab7c1b5efe845c3b3e50e0523316cdf076cfaf82" gracePeriod=60 Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.405357 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5c9b5ccf5d-z74b9"] Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.407451 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.409927 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.410174 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.497088 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c9b5ccf5d-z74b9"] Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.504861 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-internal-tls-certs\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505068 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-internal-tls-certs\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85cdp\" (UniqueName: \"kubernetes.io/projected/47747b14-f01e-4098-b420-c8c046a4c97b-kube-api-access-85cdp\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505285 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " 
pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505388 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data-custom\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505481 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505564 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-public-tls-certs\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505680 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-public-tls-certs\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505771 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-combined-ca-bundle\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.505879 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqqqz\" (UniqueName: \"kubernetes.io/projected/43d18e47-9b7e-43e2-be43-f0ea46363395-kube-api-access-sqqqz\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.506015 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data-custom\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.506099 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-combined-ca-bundle\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.608816 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data-custom\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.609370 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.609510 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-public-tls-certs\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.609668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-public-tls-certs\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.609841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-combined-ca-bundle\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.610044 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqqqz\" (UniqueName: \"kubernetes.io/projected/43d18e47-9b7e-43e2-be43-f0ea46363395-kube-api-access-sqqqz\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.610272 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data-custom\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.610417 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-combined-ca-bundle\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.610806 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-internal-tls-certs\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.610952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-internal-tls-certs\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.611095 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85cdp\" (UniqueName: \"kubernetes.io/projected/47747b14-f01e-4098-b420-c8c046a4c97b-kube-api-access-85cdp\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.611305 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.627799 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-public-tls-certs\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.627813 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-internal-tls-certs\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.628465 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data-custom\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.628926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-public-tls-certs\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.628957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-combined-ca-bundle\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.636579 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-internal-tls-certs\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.637473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data\") pod \"heat-api-55b645bf4f-f4w5l\" 
(UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.638058 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.652119 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data-custom\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.652764 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-combined-ca-bundle\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.674504 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqqqz\" (UniqueName: \"kubernetes.io/projected/43d18e47-9b7e-43e2-be43-f0ea46363395-kube-api-access-sqqqz\") pod \"heat-api-55b645bf4f-f4w5l\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.675108 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85cdp\" (UniqueName: \"kubernetes.io/projected/47747b14-f01e-4098-b420-c8c046a4c97b-kube-api-access-85cdp\") pod \"heat-cfnapi-5c9b5ccf5d-z74b9\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.700806 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.779672 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.802404 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:43:01 crc kubenswrapper[4832]: I1002 18:43:01.907323 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.002853 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56","Type":"ContainerStarted","Data":"9566279f6bdf33215ed294af0e96364da8eaf77f64715fbeefe309ab68e1d774"} Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.005425 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" podUID="f304a9e4-4a4b-4772-89f1-180613213911" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.203:8000/healthcheck\": read tcp 10.217.0.2:60266->10.217.0.203:8000: read: connection reset by peer" Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.039523 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" event={"ID":"ecd658b9-1c22-4778-afde-b392155b499a","Type":"ContainerStarted","Data":"2d09df4e608caaaf5f301323a3b61f91707a623db14f880568ef03f3dbb5d348"} Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.053578 4832 scope.go:117] "RemoveContainer" containerID="4bd2b2d83c5892725a74616f0a265cdbbf7892771a663155a367945cde17bf37" Oct 02 18:43:02 crc kubenswrapper[4832]: E1002 18:43:02.056610 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-564f8674dd-8flcg_openstack(8b30871f-113e-4bce-a095-64873f95939b)\"" pod="openstack/heat-api-564f8674dd-8flcg" podUID="8b30871f-113e-4bce-a095-64873f95939b" Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.086011 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.096646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3b380b-509b-4012-bbec-74d1dd95c048","Type":"ContainerStarted","Data":"880e1a04746c17a5b3120b4378835fa7c8343741357abac2f616532d39354ed5"} Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.099517 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68769b5c9-9g8wt" Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.250966 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64945b8848-4m4pr"] Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.251965 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64945b8848-4m4pr" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerName="neutron-api" containerID="cri-o://c82756cc52fc5e198d4fa1e83a718f56d875ce1e120aa93bd487562fe61d6898" gracePeriod=30 Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.252673 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64945b8848-4m4pr" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerName="neutron-httpd" containerID="cri-o://5a2fd4dedbe9c8ee177d5c1311806a637850a091353ca3f053f64e14211ade05" gracePeriod=30
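
The "back-off 10s" CrashLoopBackOff errors above and below follow the kubelet's container restart backoff: the delay starts at 10s and roughly doubles after each failed restart, capped at five minutes, resetting only once a container has run cleanly for about ten minutes. A minimal sketch of that schedule, assuming those default parameters (crashloop_delays is a hypothetical helper, not kubelet code):

    # Sketch of the kubelet CrashLoopBackOff delay schedule (assumed defaults:
    # 10s initial delay, doubling per failed restart, capped at 300s).
    def crashloop_delays(restarts, initial=10.0, cap=300.0):
        delay = initial
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= 2

    print(list(crashloop_delays(7)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

This is why the same "Error syncing pod, skipping" lines keep reappearing for heat-api and heat-cfnapi below, until the replacement pods go ready and the old ones are deleted (SyncLoop DELETE at 18:43:14).
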
Oct 02 18:43:02 crc kubenswrapper[4832]: I1002 18:43:02.495272 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.156622 4832 generic.go:334] "Generic (PLEG): container finished" podID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" containerID="a2d68b5e26da6887d29d472c6bb91752d56e38b88d521dbcc2825cf87bfecbb7" exitCode=1 Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.157683 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c567499b-m74m6" event={"ID":"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54","Type":"ContainerDied","Data":"a2d68b5e26da6887d29d472c6bb91752d56e38b88d521dbcc2825cf87bfecbb7"} Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.157777 4832 scope.go:117] "RemoveContainer" containerID="5d0bf408441a01d04f83c9abfc5e8995059c10938833a954a86a84cd72127551" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.158561 4832 scope.go:117] "RemoveContainer" containerID="a2d68b5e26da6887d29d472c6bb91752d56e38b88d521dbcc2825cf87bfecbb7" Oct 02 18:43:03 crc kubenswrapper[4832]: E1002 18:43:03.158918 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-68c567499b-m74m6_openstack(f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54)\"" pod="openstack/heat-api-68c567499b-m74m6" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.196282 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" event={"ID":"2e05ad7f-55e9-434c-9714-ab700b1ff7c1","Type":"ContainerStarted","Data":"4379f147e082f7b83825f4ddebcb4a3f8d524b3f97ddc45d93d312628f41729a"} Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.208213 4832 scope.go:117] "RemoveContainer" containerID="4379f147e082f7b83825f4ddebcb4a3f8d524b3f97ddc45d93d312628f41729a" Oct 02 18:43:03 crc kubenswrapper[4832]: E1002 18:43:03.208633 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7db5b7c86d-5r7nc_openstack(2e05ad7f-55e9-434c-9714-ab700b1ff7c1)\"" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.265173 4832 generic.go:334] "Generic (PLEG): container finished" podID="ecd658b9-1c22-4778-afde-b392155b499a" containerID="09b11ad0b68032e8eea590922dfef1b8b899d2bad673182e297e52283b4285f0" exitCode=0 Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.278781 4832 generic.go:334] "Generic (PLEG): container finished" podID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerID="5a2fd4dedbe9c8ee177d5c1311806a637850a091353ca3f053f64e14211ade05" exitCode=0 Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.323247 4832 generic.go:334] "Generic (PLEG): container finished" podID="f304a9e4-4a4b-4772-89f1-180613213911" containerID="ab6a24107c79e78bd2f29a02ab7c1b5efe845c3b3e50e0523316cdf076cfaf82" exitCode=0 Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.358452 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" event={"ID":"ecd658b9-1c22-4778-afde-b392155b499a","Type":"ContainerDied","Data":"09b11ad0b68032e8eea590922dfef1b8b899d2bad673182e297e52283b4285f0"} Oct 02 18:43:03 crc kubenswrapper[4832]: I1002
18:43:03.358691 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64945b8848-4m4pr" event={"ID":"8d4d6baa-ddff-4604-b621-0b875056aa02","Type":"ContainerDied","Data":"5a2fd4dedbe9c8ee177d5c1311806a637850a091353ca3f053f64e14211ade05"} Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.358706 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" event={"ID":"f304a9e4-4a4b-4772-89f1-180613213911","Type":"ContainerDied","Data":"ab6a24107c79e78bd2f29a02ab7c1b5efe845c3b3e50e0523316cdf076cfaf82"} Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.448795 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-55b645bf4f-f4w5l"] Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.501224 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.562076 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data\") pod \"f304a9e4-4a4b-4772-89f1-180613213911\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.562181 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data-custom\") pod \"f304a9e4-4a4b-4772-89f1-180613213911\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.562453 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjgrq\" (UniqueName: \"kubernetes.io/projected/f304a9e4-4a4b-4772-89f1-180613213911-kube-api-access-pjgrq\") pod \"f304a9e4-4a4b-4772-89f1-180613213911\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.562934 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-combined-ca-bundle\") pod \"f304a9e4-4a4b-4772-89f1-180613213911\" (UID: \"f304a9e4-4a4b-4772-89f1-180613213911\") " Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.569581 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f304a9e4-4a4b-4772-89f1-180613213911" (UID: "f304a9e4-4a4b-4772-89f1-180613213911"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.574971 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f304a9e4-4a4b-4772-89f1-180613213911-kube-api-access-pjgrq" (OuterVolumeSpecName: "kube-api-access-pjgrq") pod "f304a9e4-4a4b-4772-89f1-180613213911" (UID: "f304a9e4-4a4b-4772-89f1-180613213911"). InnerVolumeSpecName "kube-api-access-pjgrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.631062 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f304a9e4-4a4b-4772-89f1-180613213911" (UID: "f304a9e4-4a4b-4772-89f1-180613213911"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.667284 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjgrq\" (UniqueName: \"kubernetes.io/projected/f304a9e4-4a4b-4772-89f1-180613213911-kube-api-access-pjgrq\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.667319 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.667329 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.686092 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data" (OuterVolumeSpecName: "config-data") pod "f304a9e4-4a4b-4772-89f1-180613213911" (UID: "f304a9e4-4a4b-4772-89f1-180613213911"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.777969 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f304a9e4-4a4b-4772-89f1-180613213911-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:03 crc kubenswrapper[4832]: I1002 18:43:03.899608 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c9b5ccf5d-z74b9"] Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.026634 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.027649 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.391741 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" event={"ID":"47747b14-f01e-4098-b420-c8c046a4c97b","Type":"ContainerStarted","Data":"65c9d6f892c43a1005a4e6b40c515df53cedc9b8d1d9ab3fd0a1ded2e8b6b30d"} Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.484354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55b645bf4f-f4w5l" event={"ID":"43d18e47-9b7e-43e2-be43-f0ea46363395","Type":"ContainerStarted","Data":"1a2ba0c29e4f106af5769d0939ef0912a2a38b323d5f3098f26817b4605b07aa"} Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.509706 4832 util.go:48] "No ready sandbox for pod can be found. 
Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.509706 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.511446 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" containerID="4379f147e082f7b83825f4ddebcb4a3f8d524b3f97ddc45d93d312628f41729a" exitCode=1 Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.511579 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" event={"ID":"2e05ad7f-55e9-434c-9714-ab700b1ff7c1","Type":"ContainerDied","Data":"4379f147e082f7b83825f4ddebcb4a3f8d524b3f97ddc45d93d312628f41729a"} Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.511616 4832 scope.go:117] "RemoveContainer" containerID="d48037784852b17f9da82d4883b6e90337d71dcf65342aef33252dad335dcfc3" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.512395 4832 scope.go:117] "RemoveContainer" containerID="4379f147e082f7b83825f4ddebcb4a3f8d524b3f97ddc45d93d312628f41729a" Oct 02 18:43:04 crc kubenswrapper[4832]: E1002 18:43:04.512603 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7db5b7c86d-5r7nc_openstack(2e05ad7f-55e9-434c-9714-ab700b1ff7c1)\"" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.531083 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56","Type":"ContainerStarted","Data":"f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2"} Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.556173 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" event={"ID":"ecd658b9-1c22-4778-afde-b392155b499a","Type":"ContainerStarted","Data":"7e538ab23eed35a72ba35c848b06cc5c8bc855896563d008d0a6e697c2c7a86f"} Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.557099 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.572047 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" event={"ID":"f304a9e4-4a4b-4772-89f1-180613213911","Type":"ContainerDied","Data":"fe6a77f41541120348dcf0d389e19e71f72a35f525f10674bd9f785df5da13af"} Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.572076 4832 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-cfnapi-5bbc7df46d-j8ftx" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.601545 4832 scope.go:117] "RemoveContainer" containerID="a2d68b5e26da6887d29d472c6bb91752d56e38b88d521dbcc2825cf87bfecbb7" Oct 02 18:43:04 crc kubenswrapper[4832]: E1002 18:43:04.601982 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-68c567499b-m74m6_openstack(f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54)\"" pod="openstack/heat-api-68c567499b-m74m6" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.614123 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3b380b-509b-4012-bbec-74d1dd95c048","Type":"ContainerStarted","Data":"e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32"} Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.642063 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data\") pod \"8b30871f-113e-4bce-a095-64873f95939b\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.642647 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68rtw\" (UniqueName: \"kubernetes.io/projected/8b30871f-113e-4bce-a095-64873f95939b-kube-api-access-68rtw\") pod \"8b30871f-113e-4bce-a095-64873f95939b\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.642759 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-combined-ca-bundle\") pod \"8b30871f-113e-4bce-a095-64873f95939b\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.642948 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data-custom\") pod \"8b30871f-113e-4bce-a095-64873f95939b\" (UID: \"8b30871f-113e-4bce-a095-64873f95939b\") " Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.668255 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="ceilometer-central-agent" containerID="cri-o://a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc" gracePeriod=30 Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.668521 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerStarted","Data":"d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd"} Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.668560 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.668842 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="proxy-httpd" containerID="cri-o://d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd" gracePeriod=30 
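
Each "Killing container with a grace period" entry above is the kubelet asking CRI-O to deliver SIGTERM and then, after gracePeriod seconds (30 for ceilometer-0 and cinder-api here, 10 for dnsmasq further down), SIGKILL if the process is still alive. A container that exits on the SIGTERM itself is conventionally reported as 128 + 15 = 143, which is how the exitCode=143 logged for cinder-api-log below reads. A quick check of the arithmetic:

    import signal

    # Signal deaths are conventionally reported as 128 + signal number:
    # SIGTERM (15) -> 143, SIGKILL (9) -> 137.
    assert 128 + signal.SIGTERM == 143
    assert 128 + signal.SIGKILL == 137

The exitCode=0 entries (neutron-httpd, dnsmasq-dns) are containers that shut down cleanly within their grace period instead.
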
Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.668913 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="sg-core" containerID="cri-o://8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6" gracePeriod=30 Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.668947 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="ceilometer-notification-agent" containerID="cri-o://f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e" gracePeriod=30 Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.670129 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" podStartSLOduration=6.670104816 podStartE2EDuration="6.670104816s" podCreationTimestamp="2025-10-02 18:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:43:04.589402441 +0000 UTC m=+1341.558845313" watchObservedRunningTime="2025-10-02 18:43:04.670104816 +0000 UTC m=+1341.639547688" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.692656 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b30871f-113e-4bce-a095-64873f95939b-kube-api-access-68rtw" (OuterVolumeSpecName: "kube-api-access-68rtw") pod "8b30871f-113e-4bce-a095-64873f95939b" (UID: "8b30871f-113e-4bce-a095-64873f95939b"). InnerVolumeSpecName "kube-api-access-68rtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.697293 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8b30871f-113e-4bce-a095-64873f95939b" (UID: "8b30871f-113e-4bce-a095-64873f95939b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.744908 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.253727687 podStartE2EDuration="12.74487086s" podCreationTimestamp="2025-10-02 18:42:52 +0000 UTC" firstStartedPulling="2025-10-02 18:42:53.174878776 +0000 UTC m=+1330.144321648" lastFinishedPulling="2025-10-02 18:43:01.666021949 +0000 UTC m=+1338.635464821" observedRunningTime="2025-10-02 18:43:04.692216268 +0000 UTC m=+1341.661659130" watchObservedRunningTime="2025-10-02 18:43:04.74487086 +0000 UTC m=+1341.714313732" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.745640 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68rtw\" (UniqueName: \"kubernetes.io/projected/8b30871f-113e-4bce-a095-64873f95939b-kube-api-access-68rtw\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.748368 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.825419 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data" (OuterVolumeSpecName: "config-data") pod "8b30871f-113e-4bce-a095-64873f95939b" (UID: "8b30871f-113e-4bce-a095-64873f95939b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.851380 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.888941 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76ff85fb9f-z7h2h" podUID="c8a1ef0f-3cab-47b0-a020-e47fa685335f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.982652 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b30871f-113e-4bce-a095-64873f95939b" (UID: "8b30871f-113e-4bce-a095-64873f95939b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:04 crc kubenswrapper[4832]: I1002 18:43:04.988177 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5bbc7df46d-j8ftx"] Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.001951 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5bbc7df46d-j8ftx"] Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.057663 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b30871f-113e-4bce-a095-64873f95939b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.180890 4832 scope.go:117] "RemoveContainer" containerID="ab6a24107c79e78bd2f29a02ab7c1b5efe845c3b3e50e0523316cdf076cfaf82" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.267985 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f304a9e4-4a4b-4772-89f1-180613213911" path="/var/lib/kubelet/pods/f304a9e4-4a4b-4772-89f1-180613213911/volumes" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.681605 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-564f8674dd-8flcg" event={"ID":"8b30871f-113e-4bce-a095-64873f95939b","Type":"ContainerDied","Data":"edb7490f91ad65fd86c6caa6abd8f84d2f939fd9f27d4ccc0fa12c2def6711da"} Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.681682 4832 scope.go:117] "RemoveContainer" containerID="4bd2b2d83c5892725a74616f0a265cdbbf7892771a663155a367945cde17bf37" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.681784 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-564f8674dd-8flcg" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.691113 4832 scope.go:117] "RemoveContainer" containerID="4379f147e082f7b83825f4ddebcb4a3f8d524b3f97ddc45d93d312628f41729a" Oct 02 18:43:05 crc kubenswrapper[4832]: E1002 18:43:05.691779 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7db5b7c86d-5r7nc_openstack(2e05ad7f-55e9-434c-9714-ab700b1ff7c1)\"" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.698251 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerName="cinder-api-log" containerID="cri-o://e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32" gracePeriod=30 Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.698402 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3b380b-509b-4012-bbec-74d1dd95c048","Type":"ContainerStarted","Data":"b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c"} Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.698452 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.698488 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerName="cinder-api" containerID="cri-o://b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c" gracePeriod=30 Oct 02 18:43:05 crc kubenswrapper[4832]: 
I1002 18:43:05.724327 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-564f8674dd-8flcg"] Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.728131 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-564f8674dd-8flcg"] Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.730773 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerID="8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6" exitCode=2 Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.730802 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerID="f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e" exitCode=0 Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.730839 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerDied","Data":"8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6"} Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.730866 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerDied","Data":"f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e"} Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.734564 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" event={"ID":"47747b14-f01e-4098-b420-c8c046a4c97b","Type":"ContainerStarted","Data":"7500001a722414de123d7f2a94f13d6171be72038878a977964e5d8664c70b56"} Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.735536 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.744719 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.744700509 podStartE2EDuration="7.744700509s" podCreationTimestamp="2025-10-02 18:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:43:05.744199754 +0000 UTC m=+1342.713642626" watchObservedRunningTime="2025-10-02 18:43:05.744700509 +0000 UTC m=+1342.714143371" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.757356 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55b645bf4f-f4w5l" event={"ID":"43d18e47-9b7e-43e2-be43-f0ea46363395","Type":"ContainerStarted","Data":"962f9bcfe66ab54156048f85ec0f78cf786098e0034e68a616e64dd5fd157f03"} Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.757395 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.761404 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" podStartSLOduration=4.761390683 podStartE2EDuration="4.761390683s" podCreationTimestamp="2025-10-02 18:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:43:05.760045582 +0000 UTC m=+1342.729488454" watchObservedRunningTime="2025-10-02 18:43:05.761390683 +0000 UTC m=+1342.730833555" Oct 02 18:43:05 crc kubenswrapper[4832]: I1002 18:43:05.788857 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-55b645bf4f-f4w5l" podStartSLOduration=4.788836278 podStartE2EDuration="4.788836278s" podCreationTimestamp="2025-10-02 18:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:43:05.776768076 +0000 UTC m=+1342.746210958" watchObservedRunningTime="2025-10-02 18:43:05.788836278 +0000 UTC m=+1342.758279150" Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.769823 4832 generic.go:334] "Generic (PLEG): container finished" podID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerID="c82756cc52fc5e198d4fa1e83a718f56d875ce1e120aa93bd487562fe61d6898" exitCode=0 Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.769929 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64945b8848-4m4pr" event={"ID":"8d4d6baa-ddff-4604-b621-0b875056aa02","Type":"ContainerDied","Data":"c82756cc52fc5e198d4fa1e83a718f56d875ce1e120aa93bd487562fe61d6898"} Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.774165 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerID="e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32" exitCode=143 Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.774312 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3b380b-509b-4012-bbec-74d1dd95c048","Type":"ContainerDied","Data":"e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32"} Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.777788 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56","Type":"ContainerStarted","Data":"f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32"} Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.800516 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.801409 4832 scope.go:117] "RemoveContainer" containerID="a2d68b5e26da6887d29d472c6bb91752d56e38b88d521dbcc2825cf87bfecbb7" Oct 02 18:43:06 crc kubenswrapper[4832]: E1002 18:43:06.801711 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-68c567499b-m74m6_openstack(f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54)\"" pod="openstack/heat-api-68c567499b-m74m6" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.810534 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.1862102740000005 podStartE2EDuration="8.81051026s" podCreationTimestamp="2025-10-02 18:42:58 +0000 UTC" firstStartedPulling="2025-10-02 18:43:00.909142123 +0000 UTC m=+1337.878584995" lastFinishedPulling="2025-10-02 18:43:02.533442109 +0000 UTC m=+1339.502884981" observedRunningTime="2025-10-02 18:43:06.80365577 +0000 UTC m=+1343.773098652" watchObservedRunningTime="2025-10-02 18:43:06.81051026 +0000 UTC m=+1343.779953132" Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.907309 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 
18:43:06.907371 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:43:06 crc kubenswrapper[4832]: I1002 18:43:06.908311 4832 scope.go:117] "RemoveContainer" containerID="4379f147e082f7b83825f4ddebcb4a3f8d524b3f97ddc45d93d312628f41729a" Oct 02 18:43:06 crc kubenswrapper[4832]: E1002 18:43:06.908661 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7db5b7c86d-5r7nc_openstack(2e05ad7f-55e9-434c-9714-ab700b1ff7c1)\"" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" Oct 02 18:43:07 crc kubenswrapper[4832]: I1002 18:43:07.244760 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b30871f-113e-4bce-a095-64873f95939b" path="/var/lib/kubelet/pods/8b30871f-113e-4bce-a095-64873f95939b/volumes" Oct 02 18:43:09 crc kubenswrapper[4832]: I1002 18:43:09.236057 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 02 18:43:09 crc kubenswrapper[4832]: I1002 18:43:09.237813 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.210:8080/\": dial tcp 10.217.0.210:8080: connect: connection refused" Oct 02 18:43:09 crc kubenswrapper[4832]: I1002 18:43:09.314211 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:43:09 crc kubenswrapper[4832]: I1002 18:43:09.380928 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pk84v"] Oct 02 18:43:09 crc kubenswrapper[4832]: I1002 18:43:09.381218 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerName="dnsmasq-dns" containerID="cri-o://7ac0278d3fe639ffd8498de78a1f4e384b19b9b4ce20cd83c97a09e7e0284c3a" gracePeriod=10 Oct 02 18:43:09 crc kubenswrapper[4832]: I1002 18:43:09.777544 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-64677dc65c-wh4zf" Oct 02 18:43:09 crc kubenswrapper[4832]: I1002 18:43:09.838758 4832 generic.go:334] "Generic (PLEG): container finished" podID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerID="7ac0278d3fe639ffd8498de78a1f4e384b19b9b4ce20cd83c97a09e7e0284c3a" exitCode=0 Oct 02 18:43:09 crc kubenswrapper[4832]: I1002 18:43:09.838821 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" event={"ID":"86aa56ca-c6e9-4382-a9aa-fea6afc94ade","Type":"ContainerDied","Data":"7ac0278d3fe639ffd8498de78a1f4e384b19b9b4ce20cd83c97a09e7e0284c3a"} Oct 02 18:43:11 crc kubenswrapper[4832]: I1002 18:43:11.870477 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerID="a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc" exitCode=0 Oct 02 18:43:11 crc kubenswrapper[4832]: I1002 18:43:11.871022 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerDied","Data":"a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc"}
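
The Startup probe failure above ("connect: connection refused" against http://10.217.0.210:8080/) is the kubelet issuing a plain HTTP GET: a response code in the 200-399 range counts as success, and anything else, or any transport error, counts as failure, the same mechanism behind the earlier "connection reset by peer" and "i/o timeout" readiness failures. A minimal stand-in under that assumption (http_probe is a hypothetical helper, not the kubelet prober):

    import urllib.request
    import urllib.error

    # Crude HTTP probe: success iff the GET completes with a 2xx/3xx status.
    def http_probe(url, timeout=1.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            return False  # connection refused, reset, or timeout

Once cinder-scheduler starts answering, the probe flips and the log records probe="startup" status="started" at 18:43:14.
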
Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.244021 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.298895 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-68c567499b-m74m6"] Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.342573 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.423518 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7db5b7c86d-5r7nc"] Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.614745 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.699306 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.859254 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.922466 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" event={"ID":"86aa56ca-c6e9-4382-a9aa-fea6afc94ade","Type":"ContainerDied","Data":"21203fcf759b2e62d1e4f1e91418983405c4e8a3dc3118ded19e2a6d0cc2c7e6"} Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.922519 4832 scope.go:117] "RemoveContainer" containerID="7ac0278d3fe639ffd8498de78a1f4e384b19b9b4ce20cd83c97a09e7e0284c3a" Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.922635 4832 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.928228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ec4cba1f-e0b4-4901-add4-513dc675408e","Type":"ContainerStarted","Data":"63e8ce82473b1a88be1482d5542fa21b5805c03b467cb9e8145c7a52fdb8c7c4"} Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.928487 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerName="cinder-scheduler" containerID="cri-o://f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2" gracePeriod=30 Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.928561 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerName="probe" containerID="cri-o://f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32" gracePeriod=30 Oct 02 18:43:14 crc kubenswrapper[4832]: I1002 18:43:14.957672 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.387598566 podStartE2EDuration="28.957653437s" podCreationTimestamp="2025-10-02 18:42:46 +0000 UTC" firstStartedPulling="2025-10-02 18:42:47.511096506 +0000 UTC m=+1324.480539378" lastFinishedPulling="2025-10-02 18:43:14.081151377 +0000 UTC m=+1351.050594249" observedRunningTime="2025-10-02 18:43:14.949565658 +0000 UTC m=+1351.919008530" watchObservedRunningTime="2025-10-02 18:43:14.957653437 +0000 UTC m=+1351.927096309" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.001490 4832 scope.go:117] "RemoveContainer" containerID="af9d17e96a559e814fd35a89f639b161c7775a1542d35b3efa562dc15bc72b7f" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.004442 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-swift-storage-0\") pod \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.004702 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-nb\") pod \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.004760 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l227d\" (UniqueName: \"kubernetes.io/projected/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-kube-api-access-l227d\") pod \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.004782 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-sb\") pod \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.004855 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-svc\") pod 
\"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.004961 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-config\") pod \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\" (UID: \"86aa56ca-c6e9-4382-a9aa-fea6afc94ade\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.016396 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-kube-api-access-l227d" (OuterVolumeSpecName: "kube-api-access-l227d") pod "86aa56ca-c6e9-4382-a9aa-fea6afc94ade" (UID: "86aa56ca-c6e9-4382-a9aa-fea6afc94ade"). InnerVolumeSpecName "kube-api-access-l227d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.107702 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l227d\" (UniqueName: \"kubernetes.io/projected/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-kube-api-access-l227d\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.114973 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-config" (OuterVolumeSpecName: "config") pod "86aa56ca-c6e9-4382-a9aa-fea6afc94ade" (UID: "86aa56ca-c6e9-4382-a9aa-fea6afc94ade"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.127727 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "86aa56ca-c6e9-4382-a9aa-fea6afc94ade" (UID: "86aa56ca-c6e9-4382-a9aa-fea6afc94ade"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.127850 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "86aa56ca-c6e9-4382-a9aa-fea6afc94ade" (UID: "86aa56ca-c6e9-4382-a9aa-fea6afc94ade"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.139180 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "86aa56ca-c6e9-4382-a9aa-fea6afc94ade" (UID: "86aa56ca-c6e9-4382-a9aa-fea6afc94ade"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.151702 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86aa56ca-c6e9-4382-a9aa-fea6afc94ade" (UID: "86aa56ca-c6e9-4382-a9aa-fea6afc94ade"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.210037 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.210228 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.210334 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.210397 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.210449 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86aa56ca-c6e9-4382-a9aa-fea6afc94ade-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.275334 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.288623 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.308829 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.413621 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfjls\" (UniqueName: \"kubernetes.io/projected/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-kube-api-access-gfjls\") pod \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.413695 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-ovndb-tls-certs\") pod \"8d4d6baa-ddff-4604-b621-0b875056aa02\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.413715 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data\") pod \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.413826 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28sb5\" (UniqueName: \"kubernetes.io/projected/8d4d6baa-ddff-4604-b621-0b875056aa02-kube-api-access-28sb5\") pod \"8d4d6baa-ddff-4604-b621-0b875056aa02\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.413894 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k9gq\" (UniqueName: \"kubernetes.io/projected/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-kube-api-access-5k9gq\") pod \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.413934 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-combined-ca-bundle\") pod \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.413971 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data-custom\") pod \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.413987 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data-custom\") pod \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.414038 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-config\") pod \"8d4d6baa-ddff-4604-b621-0b875056aa02\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.414062 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data\") pod 
\"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\" (UID: \"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.414084 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-combined-ca-bundle\") pod \"8d4d6baa-ddff-4604-b621-0b875056aa02\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.414099 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-httpd-config\") pod \"8d4d6baa-ddff-4604-b621-0b875056aa02\" (UID: \"8d4d6baa-ddff-4604-b621-0b875056aa02\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.414190 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-combined-ca-bundle\") pod \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\" (UID: \"2e05ad7f-55e9-434c-9714-ab700b1ff7c1\") " Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.441367 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8d4d6baa-ddff-4604-b621-0b875056aa02" (UID: "8d4d6baa-ddff-4604-b621-0b875056aa02"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.441403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-kube-api-access-5k9gq" (OuterVolumeSpecName: "kube-api-access-5k9gq") pod "2e05ad7f-55e9-434c-9714-ab700b1ff7c1" (UID: "2e05ad7f-55e9-434c-9714-ab700b1ff7c1"). InnerVolumeSpecName "kube-api-access-5k9gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.442246 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" (UID: "f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.472748 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e05ad7f-55e9-434c-9714-ab700b1ff7c1" (UID: "2e05ad7f-55e9-434c-9714-ab700b1ff7c1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.521604 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k9gq\" (UniqueName: \"kubernetes.io/projected/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-kube-api-access-5k9gq\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.521643 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.521652 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.521661 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.522295 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-kube-api-access-gfjls" (OuterVolumeSpecName: "kube-api-access-gfjls") pod "f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" (UID: "f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54"). InnerVolumeSpecName "kube-api-access-gfjls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.524527 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d4d6baa-ddff-4604-b621-0b875056aa02-kube-api-access-28sb5" (OuterVolumeSpecName: "kube-api-access-28sb5") pod "8d4d6baa-ddff-4604-b621-0b875056aa02" (UID: "8d4d6baa-ddff-4604-b621-0b875056aa02"). InnerVolumeSpecName "kube-api-access-28sb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.591373 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e05ad7f-55e9-434c-9714-ab700b1ff7c1" (UID: "2e05ad7f-55e9-434c-9714-ab700b1ff7c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.605614 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" (UID: "f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.624254 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28sb5\" (UniqueName: \"kubernetes.io/projected/8d4d6baa-ddff-4604-b621-0b875056aa02-kube-api-access-28sb5\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.624296 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.624307 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.624315 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfjls\" (UniqueName: \"kubernetes.io/projected/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-kube-api-access-gfjls\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.633852 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d4d6baa-ddff-4604-b621-0b875056aa02" (UID: "8d4d6baa-ddff-4604-b621-0b875056aa02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.656447 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data" (OuterVolumeSpecName: "config-data") pod "f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" (UID: "f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.683058 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data" (OuterVolumeSpecName: "config-data") pod "2e05ad7f-55e9-434c-9714-ab700b1ff7c1" (UID: "2e05ad7f-55e9-434c-9714-ab700b1ff7c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.686411 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-config" (OuterVolumeSpecName: "config") pod "8d4d6baa-ddff-4604-b621-0b875056aa02" (UID: "8d4d6baa-ddff-4604-b621-0b875056aa02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.714406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8d4d6baa-ddff-4604-b621-0b875056aa02" (UID: "8d4d6baa-ddff-4604-b621-0b875056aa02"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.726550 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.726578 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.726590 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.726601 4832 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4d6baa-ddff-4604-b621-0b875056aa02-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.726611 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e05ad7f-55e9-434c-9714-ab700b1ff7c1-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.940142 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64945b8848-4m4pr" event={"ID":"8d4d6baa-ddff-4604-b621-0b875056aa02","Type":"ContainerDied","Data":"8b624969f20d57a2f2c3d5fb287da274c9e468310b1ad737d597085457781d2e"} Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.940184 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64945b8848-4m4pr" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.940191 4832 scope.go:117] "RemoveContainer" containerID="5a2fd4dedbe9c8ee177d5c1311806a637850a091353ca3f053f64e14211ade05" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.942239 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-68c567499b-m74m6" event={"ID":"f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54","Type":"ContainerDied","Data":"a4f6c538014b077865f6d336e5f42960ac86cc1057e9608b5564c2c8571051bb"} Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.942495 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-68c567499b-m74m6" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.944133 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" event={"ID":"2e05ad7f-55e9-434c-9714-ab700b1ff7c1","Type":"ContainerDied","Data":"ae47eb1d9fe6588e099a4ce3e8e3b1a385557ea7f3c2ce2eb8c99d418e3a65a3"} Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.944200 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7db5b7c86d-5r7nc" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.959003 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerID="f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32" exitCode=0 Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.959065 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56","Type":"ContainerDied","Data":"f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32"} Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.973945 4832 scope.go:117] "RemoveContainer" containerID="c82756cc52fc5e198d4fa1e83a718f56d875ce1e120aa93bd487562fe61d6898" Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.978257 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64945b8848-4m4pr"] Oct 02 18:43:15 crc kubenswrapper[4832]: I1002 18:43:15.988380 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64945b8848-4m4pr"] Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.001213 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7db5b7c86d-5r7nc"] Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.010401 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7db5b7c86d-5r7nc"] Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.012297 4832 scope.go:117] "RemoveContainer" containerID="a2d68b5e26da6887d29d472c6bb91752d56e38b88d521dbcc2825cf87bfecbb7" Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.041137 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-68c567499b-m74m6"] Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.066601 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-68c567499b-m74m6"] Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.140522 4832 scope.go:117] "RemoveContainer" containerID="4379f147e082f7b83825f4ddebcb4a3f8d524b3f97ddc45d93d312628f41729a" Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.785952 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.806076 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.832913 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-64677dc65c-wh4zf"] Oct 02 18:43:16 crc kubenswrapper[4832]: I1002 18:43:16.833135 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-64677dc65c-wh4zf" podUID="2132fc2a-d11e-473a-b4ab-15c56ac5debf" containerName="heat-engine" containerID="cri-o://2a484137a645562083e85d0f06ef2a80f24c332fe468822f79d6ad993e6e3d32" gracePeriod=60 Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.236484 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" path="/var/lib/kubelet/pods/2e05ad7f-55e9-434c-9714-ab700b1ff7c1/volumes" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.237240 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" path="/var/lib/kubelet/pods/8d4d6baa-ddff-4604-b621-0b875056aa02/volumes" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 
18:43:17.237999 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" path="/var/lib/kubelet/pods/f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54/volumes" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.737606 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.875350 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-etc-machine-id\") pod \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.875429 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data-custom\") pod \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.875450 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szw2n\" (UniqueName: \"kubernetes.io/projected/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-kube-api-access-szw2n\") pod \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.875468 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" (UID: "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.875512 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-combined-ca-bundle\") pod \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.875577 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data\") pod \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.875671 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-scripts\") pod \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\" (UID: \"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56\") " Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.876104 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.882274 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" (UID: "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.882407 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-kube-api-access-szw2n" (OuterVolumeSpecName: "kube-api-access-szw2n") pod "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" (UID: "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56"). InnerVolumeSpecName "kube-api-access-szw2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.884437 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-scripts" (OuterVolumeSpecName: "scripts") pod "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" (UID: "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.957370 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" (UID: "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.978490 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.978522 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szw2n\" (UniqueName: \"kubernetes.io/projected/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-kube-api-access-szw2n\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.978533 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:17 crc kubenswrapper[4832]: I1002 18:43:17.978550 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.001874 4832 generic.go:334] "Generic (PLEG): container finished" podID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerID="f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2" exitCode=0 Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.001916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56","Type":"ContainerDied","Data":"f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2"} Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.001942 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56","Type":"ContainerDied","Data":"9566279f6bdf33215ed294af0e96364da8eaf77f64715fbeefe309ab68e1d774"} Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.001958 4832 scope.go:117] "RemoveContainer" containerID="f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.002054 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.026533 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data" (OuterVolumeSpecName: "config-data") pod "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" (UID: "0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.026603 4832 scope.go:117] "RemoveContainer" containerID="f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.045318 4832 scope.go:117] "RemoveContainer" containerID="f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.045766 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32\": container with ID starting with f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32 not found: ID does not exist" containerID="f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.045811 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32"} err="failed to get container status \"f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32\": rpc error: code = NotFound desc = could not find container \"f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32\": container with ID starting with f5cba00e18b064c6ce39dcb22cac593a336a1571f9c0ffbd776df05d3abedc32 not found: ID does not exist" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.045854 4832 scope.go:117] "RemoveContainer" containerID="f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.046094 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2\": container with ID starting with f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2 not found: ID does not exist" containerID="f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.046113 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2"} err="failed to get container status \"f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2\": rpc error: code = NotFound desc = could not find container \"f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2\": container with ID starting with f7f39a05fa81814b16fb766708843b1ecf8db7b8fa89070befa4bb6403e9bfa2 not found: ID does not exist" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.081023 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.335588 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.346249 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.359268 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362601 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerName="cinder-scheduler" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362646 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerName="cinder-scheduler" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362680 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" containerName="heat-cfnapi" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362689 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" containerName="heat-cfnapi" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362702 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerName="neutron-httpd" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362711 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerName="neutron-httpd" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362734 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362742 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362755 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerName="neutron-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362767 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerName="neutron-api" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362782 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerName="dnsmasq-dns" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362790 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerName="dnsmasq-dns" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362799 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b30871f-113e-4bce-a095-64873f95939b" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362806 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b30871f-113e-4bce-a095-64873f95939b" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362829 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f304a9e4-4a4b-4772-89f1-180613213911" containerName="heat-cfnapi" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362836 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f304a9e4-4a4b-4772-89f1-180613213911" containerName="heat-cfnapi" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362853 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362860 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362901 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" 
containerName="probe" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362911 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerName="probe" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.362941 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerName="init" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.362950 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerName="init" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363358 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" containerName="heat-cfnapi" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363378 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerName="cinder-scheduler" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363393 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerName="dnsmasq-dns" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363404 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363414 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" containerName="heat-cfnapi" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363430 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerName="neutron-httpd" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363446 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b30871f-113e-4bce-a095-64873f95939b" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363457 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f304a9e4-4a4b-4772-89f1-180613213911" containerName="heat-cfnapi" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363474 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f2ab9a-b6e7-4507-9b13-28ac1ffdbf54" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363498 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4d6baa-ddff-4604-b621-0b875056aa02" containerName="neutron-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363513 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b30871f-113e-4bce-a095-64873f95939b" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363528 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" containerName="probe" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.363774 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b30871f-113e-4bce-a095-64873f95939b" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363788 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b30871f-113e-4bce-a095-64873f95939b" containerName="heat-api" Oct 02 18:43:18 crc kubenswrapper[4832]: E1002 18:43:18.363831 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" containerName="heat-cfnapi" Oct 02 
18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.363839 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e05ad7f-55e9-434c-9714-ab700b1ff7c1" containerName="heat-cfnapi" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.365066 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.372430 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.372848 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.489360 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.489507 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wztzc\" (UniqueName: \"kubernetes.io/projected/c154f010-097e-4cd5-8833-798bce95b715-kube-api-access-wztzc\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.489537 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c154f010-097e-4cd5-8833-798bce95b715-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.489574 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-scripts\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.489615 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-config-data\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.489631 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.599800 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-scripts\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.599930 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-config-data\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.599964 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.600032 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.600210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wztzc\" (UniqueName: \"kubernetes.io/projected/c154f010-097e-4cd5-8833-798bce95b715-kube-api-access-wztzc\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.600246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c154f010-097e-4cd5-8833-798bce95b715-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.600516 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c154f010-097e-4cd5-8833-798bce95b715-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.604304 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.604644 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-scripts\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.606323 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.616439 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c154f010-097e-4cd5-8833-798bce95b715-config-data\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.624849 4832 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wztzc\" (UniqueName: \"kubernetes.io/projected/c154f010-097e-4cd5-8833-798bce95b715-kube-api-access-wztzc\") pod \"cinder-scheduler-0\" (UID: \"c154f010-097e-4cd5-8833-798bce95b715\") " pod="openstack/cinder-scheduler-0" Oct 02 18:43:18 crc kubenswrapper[4832]: I1002 18:43:18.691817 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 02 18:43:19 crc kubenswrapper[4832]: I1002 18:43:19.184607 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.197:5353: i/o timeout" Oct 02 18:43:19 crc kubenswrapper[4832]: I1002 18:43:19.190316 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 02 18:43:19 crc kubenswrapper[4832]: I1002 18:43:19.240331 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56" path="/var/lib/kubelet/pods/0a854b0b-d5e4-45a4-9c6a-7d1f691bfc56/volumes" Oct 02 18:43:19 crc kubenswrapper[4832]: E1002 18:43:19.660933 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a484137a645562083e85d0f06ef2a80f24c332fe468822f79d6ad993e6e3d32" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:43:19 crc kubenswrapper[4832]: E1002 18:43:19.662671 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a484137a645562083e85d0f06ef2a80f24c332fe468822f79d6ad993e6e3d32" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:43:19 crc kubenswrapper[4832]: E1002 18:43:19.663898 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a484137a645562083e85d0f06ef2a80f24c332fe468822f79d6ad993e6e3d32" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:43:19 crc kubenswrapper[4832]: E1002 18:43:19.663937 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-64677dc65c-wh4zf" podUID="2132fc2a-d11e-473a-b4ab-15c56ac5debf" containerName="heat-engine" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.037554 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c154f010-097e-4cd5-8833-798bce95b715","Type":"ContainerStarted","Data":"1a3efdc03907df6cab7ae05a12c6efd5cc67d40d289c8f4662740c0b02ea8711"} Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.037938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c154f010-097e-4cd5-8833-798bce95b715","Type":"ContainerStarted","Data":"fa14e5bf42fae1b90b6a1ed64b8e1f45b055f8df8da8e5f2736866dfa98fa9a0"} Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.351344 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:43:20 crc 
Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.351640 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="de5b2270-9247-4b59-873f-00cdf454635c" containerName="glance-httpd" containerID="cri-o://be4cee0b8aabdef3e44030aa89f518ae7be1f57a8e2bbbc94da21de6214f7bab" gracePeriod=30 Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.435755 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-p4lnx"] Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.437626 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4lnx" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.449901 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p4lnx"] Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.537502 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-r92d6"] Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.546272 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r92d6" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.561581 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-r92d6"] Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.562607 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7qt6\" (UniqueName: \"kubernetes.io/projected/43801b93-9634-4b11-995a-60ce9116aac4-kube-api-access-h7qt6\") pod \"nova-api-db-create-p4lnx\" (UID: \"43801b93-9634-4b11-995a-60ce9116aac4\") " pod="openstack/nova-api-db-create-p4lnx" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.664376 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mmnd\" (UniqueName: \"kubernetes.io/projected/4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc-kube-api-access-5mmnd\") pod \"nova-cell0-db-create-r92d6\" (UID: \"4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc\") " pod="openstack/nova-cell0-db-create-r92d6" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.664728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7qt6\" (UniqueName: \"kubernetes.io/projected/43801b93-9634-4b11-995a-60ce9116aac4-kube-api-access-h7qt6\") pod \"nova-api-db-create-p4lnx\" (UID: \"43801b93-9634-4b11-995a-60ce9116aac4\") " pod="openstack/nova-api-db-create-p4lnx" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.683112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7qt6\" (UniqueName: \"kubernetes.io/projected/43801b93-9634-4b11-995a-60ce9116aac4-kube-api-access-h7qt6\") pod \"nova-api-db-create-p4lnx\" (UID: \"43801b93-9634-4b11-995a-60ce9116aac4\") " pod="openstack/nova-api-db-create-p4lnx" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.725442 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q9dvz"] Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.728879 4832 util.go:30]
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q9dvz" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.748060 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q9dvz"] Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.760874 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4lnx" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.766882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mmnd\" (UniqueName: \"kubernetes.io/projected/4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc-kube-api-access-5mmnd\") pod \"nova-cell0-db-create-r92d6\" (UID: \"4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc\") " pod="openstack/nova-cell0-db-create-r92d6" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.787376 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mmnd\" (UniqueName: \"kubernetes.io/projected/4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc-kube-api-access-5mmnd\") pod \"nova-cell0-db-create-r92d6\" (UID: \"4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc\") " pod="openstack/nova-cell0-db-create-r92d6" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.869844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxkl\" (UniqueName: \"kubernetes.io/projected/37fb1e5a-5c6b-41c5-a77a-10ed80318ea4-kube-api-access-2wxkl\") pod \"nova-cell1-db-create-q9dvz\" (UID: \"37fb1e5a-5c6b-41c5-a77a-10ed80318ea4\") " pod="openstack/nova-cell1-db-create-q9dvz" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.881836 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r92d6" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.971762 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wxkl\" (UniqueName: \"kubernetes.io/projected/37fb1e5a-5c6b-41c5-a77a-10ed80318ea4-kube-api-access-2wxkl\") pod \"nova-cell1-db-create-q9dvz\" (UID: \"37fb1e5a-5c6b-41c5-a77a-10ed80318ea4\") " pod="openstack/nova-cell1-db-create-q9dvz" Oct 02 18:43:20 crc kubenswrapper[4832]: I1002 18:43:20.993147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wxkl\" (UniqueName: \"kubernetes.io/projected/37fb1e5a-5c6b-41c5-a77a-10ed80318ea4-kube-api-access-2wxkl\") pod \"nova-cell1-db-create-q9dvz\" (UID: \"37fb1e5a-5c6b-41c5-a77a-10ed80318ea4\") " pod="openstack/nova-cell1-db-create-q9dvz" Oct 02 18:43:21 crc kubenswrapper[4832]: I1002 18:43:21.063391 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q9dvz" Oct 02 18:43:21 crc kubenswrapper[4832]: I1002 18:43:21.099134 4832 generic.go:334] "Generic (PLEG): container finished" podID="de5b2270-9247-4b59-873f-00cdf454635c" containerID="674cd898d8ed1fb6b791a4a0c3f86da99d16b7746346d4f502e66d6207740fc5" exitCode=143 Oct 02 18:43:21 crc kubenswrapper[4832]: I1002 18:43:21.099216 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de5b2270-9247-4b59-873f-00cdf454635c","Type":"ContainerDied","Data":"674cd898d8ed1fb6b791a4a0c3f86da99d16b7746346d4f502e66d6207740fc5"} Oct 02 18:43:21 crc kubenswrapper[4832]: I1002 18:43:21.105105 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c154f010-097e-4cd5-8833-798bce95b715","Type":"ContainerStarted","Data":"ef65986a8badbbf159bad04aa660de9c932f91baf9034bc3d8ec2f656b20b00b"} Oct 02 18:43:21 crc kubenswrapper[4832]: I1002 18:43:21.130365 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.130346062 podStartE2EDuration="3.130346062s" podCreationTimestamp="2025-10-02 18:43:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:43:21.124051367 +0000 UTC m=+1358.093494239" watchObservedRunningTime="2025-10-02 18:43:21.130346062 +0000 UTC m=+1358.099788934" Oct 02 18:43:21 crc kubenswrapper[4832]: I1002 18:43:21.332365 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-p4lnx"] Oct 02 18:43:21 crc kubenswrapper[4832]: W1002 18:43:21.558222 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eeaae3e_253d_4ebc_a3e4_5ebab1635ccc.slice/crio-0ce23e46e02dd402b23268a3bbf123399c6b72198b8dc20edb681a1cfbd9eeff WatchSource:0}: Error finding container 0ce23e46e02dd402b23268a3bbf123399c6b72198b8dc20edb681a1cfbd9eeff: Status 404 returned error can't find the container with id 0ce23e46e02dd402b23268a3bbf123399c6b72198b8dc20edb681a1cfbd9eeff Oct 02 18:43:21 crc kubenswrapper[4832]: I1002 18:43:21.558379 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-r92d6"] Oct 02 18:43:21 crc kubenswrapper[4832]: I1002 18:43:21.947334 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q9dvz"] Oct 02 18:43:22 crc kubenswrapper[4832]: I1002 18:43:22.146630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4lnx" event={"ID":"43801b93-9634-4b11-995a-60ce9116aac4","Type":"ContainerStarted","Data":"5dc172bbd4ee3a181801ce87ea53a420f0fbec94a5ee52921498bd103a0e1837"} Oct 02 18:43:22 crc kubenswrapper[4832]: I1002 18:43:22.146943 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4lnx" event={"ID":"43801b93-9634-4b11-995a-60ce9116aac4","Type":"ContainerStarted","Data":"c6cb6604a64e79708fae0e6c2466cd197e859ad73be2f434fba31ab2da67d555"} Oct 02 18:43:22 crc kubenswrapper[4832]: I1002 18:43:22.152911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q9dvz" event={"ID":"37fb1e5a-5c6b-41c5-a77a-10ed80318ea4","Type":"ContainerStarted","Data":"4b109ba7ce858bd136690102d48339fbef39b8dcc27595fb351ded95b1168a73"} Oct 02 18:43:22 crc kubenswrapper[4832]: I1002 18:43:22.156711 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r92d6" event={"ID":"4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc","Type":"ContainerStarted","Data":"0ce23e46e02dd402b23268a3bbf123399c6b72198b8dc20edb681a1cfbd9eeff"} Oct 02 18:43:22 crc kubenswrapper[4832]: I1002 18:43:22.475642 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.174982 4832 generic.go:334] "Generic (PLEG): container finished" podID="4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc" containerID="973f91391ff7e877bd23ff27ae14a5b6cc0ddc25bd7dfc566a90c50aa29ebe0f" exitCode=0 Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.175081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r92d6" event={"ID":"4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc","Type":"ContainerDied","Data":"973f91391ff7e877bd23ff27ae14a5b6cc0ddc25bd7dfc566a90c50aa29ebe0f"} Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.177944 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.178278 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerName="glance-log" containerID="cri-o://17957db41c61486cfbf8f9a2a92ca6780f754ee3a70507cee0ff7529315f76a7" gracePeriod=30 Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.178358 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerName="glance-httpd" containerID="cri-o://9c153939e750e4ee27dfcc15613342fa65a3aba2cd37f230b0096894e5a430bd" gracePeriod=30 Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.178497 4832 generic.go:334] "Generic (PLEG): container finished" podID="43801b93-9634-4b11-995a-60ce9116aac4" containerID="5dc172bbd4ee3a181801ce87ea53a420f0fbec94a5ee52921498bd103a0e1837" exitCode=0 Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.178510 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4lnx" event={"ID":"43801b93-9634-4b11-995a-60ce9116aac4","Type":"ContainerDied","Data":"5dc172bbd4ee3a181801ce87ea53a420f0fbec94a5ee52921498bd103a0e1837"} Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.192297 4832 generic.go:334] "Generic (PLEG): container finished" podID="37fb1e5a-5c6b-41c5-a77a-10ed80318ea4" containerID="2ed7834be22756aae562894ef11a27ea49f844e6f8ce26a212776e76fe3c60d0" exitCode=0 Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.192602 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q9dvz" event={"ID":"37fb1e5a-5c6b-41c5-a77a-10ed80318ea4","Type":"ContainerDied","Data":"2ed7834be22756aae562894ef11a27ea49f844e6f8ce26a212776e76fe3c60d0"} Oct 02 18:43:23 crc kubenswrapper[4832]: I1002 18:43:23.697017 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.203344 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerID="17957db41c61486cfbf8f9a2a92ca6780f754ee3a70507cee0ff7529315f76a7" exitCode=143 Oct 02 18:43:24 crc 
Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.206852 4832 generic.go:334] "Generic (PLEG): container finished" podID="de5b2270-9247-4b59-873f-00cdf454635c" containerID="be4cee0b8aabdef3e44030aa89f518ae7be1f57a8e2bbbc94da21de6214f7bab" exitCode=0 Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.207208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de5b2270-9247-4b59-873f-00cdf454635c","Type":"ContainerDied","Data":"be4cee0b8aabdef3e44030aa89f518ae7be1f57a8e2bbbc94da21de6214f7bab"} Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.455169 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.593376 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-httpd-run\") pod \"de5b2270-9247-4b59-873f-00cdf454635c\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.593610 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-config-data\") pod \"de5b2270-9247-4b59-873f-00cdf454635c\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.593633 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-scripts\") pod \"de5b2270-9247-4b59-873f-00cdf454635c\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.593691 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v5g8\" (UniqueName: \"kubernetes.io/projected/de5b2270-9247-4b59-873f-00cdf454635c-kube-api-access-2v5g8\") pod \"de5b2270-9247-4b59-873f-00cdf454635c\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.593748 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-public-tls-certs\") pod \"de5b2270-9247-4b59-873f-00cdf454635c\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.595550 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-combined-ca-bundle\") pod \"de5b2270-9247-4b59-873f-00cdf454635c\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.595592 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-logs\") pod \"de5b2270-9247-4b59-873f-00cdf454635c\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002
18:43:24.595722 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"de5b2270-9247-4b59-873f-00cdf454635c\" (UID: \"de5b2270-9247-4b59-873f-00cdf454635c\") " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.599198 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "de5b2270-9247-4b59-873f-00cdf454635c" (UID: "de5b2270-9247-4b59-873f-00cdf454635c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.599713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-logs" (OuterVolumeSpecName: "logs") pod "de5b2270-9247-4b59-873f-00cdf454635c" (UID: "de5b2270-9247-4b59-873f-00cdf454635c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.603608 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5b2270-9247-4b59-873f-00cdf454635c-kube-api-access-2v5g8" (OuterVolumeSpecName: "kube-api-access-2v5g8") pod "de5b2270-9247-4b59-873f-00cdf454635c" (UID: "de5b2270-9247-4b59-873f-00cdf454635c"). InnerVolumeSpecName "kube-api-access-2v5g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.604254 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-scripts" (OuterVolumeSpecName: "scripts") pod "de5b2270-9247-4b59-873f-00cdf454635c" (UID: "de5b2270-9247-4b59-873f-00cdf454635c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.604498 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "de5b2270-9247-4b59-873f-00cdf454635c" (UID: "de5b2270-9247-4b59-873f-00cdf454635c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.638815 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de5b2270-9247-4b59-873f-00cdf454635c" (UID: "de5b2270-9247-4b59-873f-00cdf454635c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.681061 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "de5b2270-9247-4b59-873f-00cdf454635c" (UID: "de5b2270-9247-4b59-873f-00cdf454635c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.698544 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.698581 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.698595 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.698628 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.698652 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/de5b2270-9247-4b59-873f-00cdf454635c-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.698664 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.698675 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v5g8\" (UniqueName: \"kubernetes.io/projected/de5b2270-9247-4b59-873f-00cdf454635c-kube-api-access-2v5g8\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.703504 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-config-data" (OuterVolumeSpecName: "config-data") pod "de5b2270-9247-4b59-873f-00cdf454635c" (UID: "de5b2270-9247-4b59-873f-00cdf454635c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.729532 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.800579 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:24 crc kubenswrapper[4832]: I1002 18:43:24.800610 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de5b2270-9247-4b59-873f-00cdf454635c-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.118932 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q9dvz" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.209587 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wxkl\" (UniqueName: \"kubernetes.io/projected/37fb1e5a-5c6b-41c5-a77a-10ed80318ea4-kube-api-access-2wxkl\") pod \"37fb1e5a-5c6b-41c5-a77a-10ed80318ea4\" (UID: \"37fb1e5a-5c6b-41c5-a77a-10ed80318ea4\") " Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.215993 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fb1e5a-5c6b-41c5-a77a-10ed80318ea4-kube-api-access-2wxkl" (OuterVolumeSpecName: "kube-api-access-2wxkl") pod "37fb1e5a-5c6b-41c5-a77a-10ed80318ea4" (UID: "37fb1e5a-5c6b-41c5-a77a-10ed80318ea4"). InnerVolumeSpecName "kube-api-access-2wxkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.219726 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"de5b2270-9247-4b59-873f-00cdf454635c","Type":"ContainerDied","Data":"8ad6520891ce4b941e519b3a29d198b5dbc59dc9c7274945e50bd798d6bf0e76"} Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.219771 4832 scope.go:117] "RemoveContainer" containerID="be4cee0b8aabdef3e44030aa89f518ae7be1f57a8e2bbbc94da21de6214f7bab" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.219887 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.228698 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q9dvz" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.258241 4832 scope.go:117] "RemoveContainer" containerID="674cd898d8ed1fb6b791a4a0c3f86da99d16b7746346d4f502e66d6207740fc5" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.278439 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q9dvz" event={"ID":"37fb1e5a-5c6b-41c5-a77a-10ed80318ea4","Type":"ContainerDied","Data":"4b109ba7ce858bd136690102d48339fbef39b8dcc27595fb351ded95b1168a73"} Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.278500 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b109ba7ce858bd136690102d48339fbef39b8dcc27595fb351ded95b1168a73" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.290308 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.303310 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.314424 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wxkl\" (UniqueName: \"kubernetes.io/projected/37fb1e5a-5c6b-41c5-a77a-10ed80318ea4-kube-api-access-2wxkl\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.317046 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:43:25 crc kubenswrapper[4832]: E1002 18:43:25.317515 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fb1e5a-5c6b-41c5-a77a-10ed80318ea4" containerName="mariadb-database-create" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.317529 4832 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="37fb1e5a-5c6b-41c5-a77a-10ed80318ea4" containerName="mariadb-database-create" Oct 02 18:43:25 crc kubenswrapper[4832]: E1002 18:43:25.317542 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de5b2270-9247-4b59-873f-00cdf454635c" containerName="glance-log" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.317548 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="de5b2270-9247-4b59-873f-00cdf454635c" containerName="glance-log" Oct 02 18:43:25 crc kubenswrapper[4832]: E1002 18:43:25.319373 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de5b2270-9247-4b59-873f-00cdf454635c" containerName="glance-httpd" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.319408 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="de5b2270-9247-4b59-873f-00cdf454635c" containerName="glance-httpd" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.319926 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fb1e5a-5c6b-41c5-a77a-10ed80318ea4" containerName="mariadb-database-create" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.319939 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="de5b2270-9247-4b59-873f-00cdf454635c" containerName="glance-log" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.319957 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="de5b2270-9247-4b59-873f-00cdf454635c" containerName="glance-httpd" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.321403 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.325224 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.325590 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.328229 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.416466 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcf8\" (UniqueName: \"kubernetes.io/projected/b7d0ad2c-59e0-4aee-930a-560d811c393c-kube-api-access-nlcf8\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.416565 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.416590 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.416640 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.416694 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.416715 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.416773 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7d0ad2c-59e0-4aee-930a-560d811c393c-logs\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.416793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7d0ad2c-59e0-4aee-930a-560d811c393c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.455821 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r92d6" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.465403 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-p4lnx" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.517634 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7qt6\" (UniqueName: \"kubernetes.io/projected/43801b93-9634-4b11-995a-60ce9116aac4-kube-api-access-h7qt6\") pod \"43801b93-9634-4b11-995a-60ce9116aac4\" (UID: \"43801b93-9634-4b11-995a-60ce9116aac4\") " Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.517963 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mmnd\" (UniqueName: \"kubernetes.io/projected/4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc-kube-api-access-5mmnd\") pod \"4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc\" (UID: \"4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc\") " Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518325 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7d0ad2c-59e0-4aee-930a-560d811c393c-logs\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518352 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7d0ad2c-59e0-4aee-930a-560d811c393c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518410 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcf8\" (UniqueName: \"kubernetes.io/projected/b7d0ad2c-59e0-4aee-930a-560d811c393c-kube-api-access-nlcf8\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518458 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518495 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518552 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518609 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 
18:43:25.518634 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518702 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7d0ad2c-59e0-4aee-930a-560d811c393c-logs\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.518999 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.519238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b7d0ad2c-59e0-4aee-930a-560d811c393c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.524163 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43801b93-9634-4b11-995a-60ce9116aac4-kube-api-access-h7qt6" (OuterVolumeSpecName: "kube-api-access-h7qt6") pod "43801b93-9634-4b11-995a-60ce9116aac4" (UID: "43801b93-9634-4b11-995a-60ce9116aac4"). InnerVolumeSpecName "kube-api-access-h7qt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.524214 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc-kube-api-access-5mmnd" (OuterVolumeSpecName: "kube-api-access-5mmnd") pod "4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc" (UID: "4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc"). InnerVolumeSpecName "kube-api-access-5mmnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.527463 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-scripts\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.527529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.530000 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-config-data\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.536712 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7d0ad2c-59e0-4aee-930a-560d811c393c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.544587 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcf8\" (UniqueName: \"kubernetes.io/projected/b7d0ad2c-59e0-4aee-930a-560d811c393c-kube-api-access-nlcf8\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.565081 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"b7d0ad2c-59e0-4aee-930a-560d811c393c\") " pod="openstack/glance-default-external-api-0" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.620104 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7qt6\" (UniqueName: \"kubernetes.io/projected/43801b93-9634-4b11-995a-60ce9116aac4-kube-api-access-h7qt6\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.620137 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mmnd\" (UniqueName: \"kubernetes.io/projected/4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc-kube-api-access-5mmnd\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:25 crc kubenswrapper[4832]: I1002 18:43:25.750391 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 02 18:43:26 crc kubenswrapper[4832]: I1002 18:43:26.290924 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-r92d6" event={"ID":"4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc","Type":"ContainerDied","Data":"0ce23e46e02dd402b23268a3bbf123399c6b72198b8dc20edb681a1cfbd9eeff"} Oct 02 18:43:26 crc kubenswrapper[4832]: I1002 18:43:26.291456 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce23e46e02dd402b23268a3bbf123399c6b72198b8dc20edb681a1cfbd9eeff" Oct 02 18:43:26 crc kubenswrapper[4832]: I1002 18:43:26.291652 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-r92d6" Oct 02 18:43:26 crc kubenswrapper[4832]: I1002 18:43:26.319500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-p4lnx" event={"ID":"43801b93-9634-4b11-995a-60ce9116aac4","Type":"ContainerDied","Data":"c6cb6604a64e79708fae0e6c2466cd197e859ad73be2f434fba31ab2da67d555"} Oct 02 18:43:26 crc kubenswrapper[4832]: I1002 18:43:26.319554 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6cb6604a64e79708fae0e6c2466cd197e859ad73be2f434fba31ab2da67d555" Oct 02 18:43:26 crc kubenswrapper[4832]: I1002 18:43:26.319694 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-p4lnx" Oct 02 18:43:26 crc kubenswrapper[4832]: W1002 18:43:26.446719 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7d0ad2c_59e0_4aee_930a_560d811c393c.slice/crio-296eea95cd9e787e62975249d79a871e344a21940425777e44693d6c1e49d58d WatchSource:0}: Error finding container 296eea95cd9e787e62975249d79a871e344a21940425777e44693d6c1e49d58d: Status 404 returned error can't find the container with id 296eea95cd9e787e62975249d79a871e344a21940425777e44693d6c1e49d58d Oct 02 18:43:26 crc kubenswrapper[4832]: I1002 18:43:26.454392 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.241720 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de5b2270-9247-4b59-873f-00cdf454635c" path="/var/lib/kubelet/pods/de5b2270-9247-4b59-873f-00cdf454635c/volumes" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.337585 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b7d0ad2c-59e0-4aee-930a-560d811c393c","Type":"ContainerStarted","Data":"153da7f3502451a364a151810f3851d842b3e9228d3cc7bcad74e389dd2d6c0b"} Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.337842 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b7d0ad2c-59e0-4aee-930a-560d811c393c","Type":"ContainerStarted","Data":"296eea95cd9e787e62975249d79a871e344a21940425777e44693d6c1e49d58d"} Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.339852 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerID="9c153939e750e4ee27dfcc15613342fa65a3aba2cd37f230b0096894e5a430bd" exitCode=0 Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.339876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"eb31897d-9d37-446e-9cde-08d0e12fc428","Type":"ContainerDied","Data":"9c153939e750e4ee27dfcc15613342fa65a3aba2cd37f230b0096894e5a430bd"} Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.339891 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eb31897d-9d37-446e-9cde-08d0e12fc428","Type":"ContainerDied","Data":"ea7c56892141aab76ca9ced69b3b9f173f9a3ead307a7f80a203fd868d41f0b1"} Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.339902 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea7c56892141aab76ca9ced69b3b9f173f9a3ead307a7f80a203fd868d41f0b1" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.433406 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.471010 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-httpd-run\") pod \"eb31897d-9d37-446e-9cde-08d0e12fc428\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.471054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-config-data\") pod \"eb31897d-9d37-446e-9cde-08d0e12fc428\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.471108 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-logs\") pod \"eb31897d-9d37-446e-9cde-08d0e12fc428\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.471189 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-combined-ca-bundle\") pod \"eb31897d-9d37-446e-9cde-08d0e12fc428\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.471229 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-scripts\") pod \"eb31897d-9d37-446e-9cde-08d0e12fc428\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.471286 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-internal-tls-certs\") pod \"eb31897d-9d37-446e-9cde-08d0e12fc428\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.471463 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"eb31897d-9d37-446e-9cde-08d0e12fc428\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.471486 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhnlh\" (UniqueName: \"kubernetes.io/projected/eb31897d-9d37-446e-9cde-08d0e12fc428-kube-api-access-dhnlh\") pod 
\"eb31897d-9d37-446e-9cde-08d0e12fc428\" (UID: \"eb31897d-9d37-446e-9cde-08d0e12fc428\") " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.486813 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-logs" (OuterVolumeSpecName: "logs") pod "eb31897d-9d37-446e-9cde-08d0e12fc428" (UID: "eb31897d-9d37-446e-9cde-08d0e12fc428"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.487028 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb31897d-9d37-446e-9cde-08d0e12fc428" (UID: "eb31897d-9d37-446e-9cde-08d0e12fc428"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.493212 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb31897d-9d37-446e-9cde-08d0e12fc428-kube-api-access-dhnlh" (OuterVolumeSpecName: "kube-api-access-dhnlh") pod "eb31897d-9d37-446e-9cde-08d0e12fc428" (UID: "eb31897d-9d37-446e-9cde-08d0e12fc428"). InnerVolumeSpecName "kube-api-access-dhnlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.521416 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-scripts" (OuterVolumeSpecName: "scripts") pod "eb31897d-9d37-446e-9cde-08d0e12fc428" (UID: "eb31897d-9d37-446e-9cde-08d0e12fc428"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.527791 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "eb31897d-9d37-446e-9cde-08d0e12fc428" (UID: "eb31897d-9d37-446e-9cde-08d0e12fc428"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.576774 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.576807 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhnlh\" (UniqueName: \"kubernetes.io/projected/eb31897d-9d37-446e-9cde-08d0e12fc428-kube-api-access-dhnlh\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.576816 4832 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.576829 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31897d-9d37-446e-9cde-08d0e12fc428-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.576836 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.613881 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb31897d-9d37-446e-9cde-08d0e12fc428" (UID: "eb31897d-9d37-446e-9cde-08d0e12fc428"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.660933 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb31897d-9d37-446e-9cde-08d0e12fc428" (UID: "eb31897d-9d37-446e-9cde-08d0e12fc428"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.680734 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.680766 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.681650 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.709608 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-config-data" (OuterVolumeSpecName: "config-data") pod "eb31897d-9d37-446e-9cde-08d0e12fc428" (UID: "eb31897d-9d37-446e-9cde-08d0e12fc428"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.782840 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31897d-9d37-446e-9cde-08d0e12fc428-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:27 crc kubenswrapper[4832]: I1002 18:43:27.782870 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.354932 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b7d0ad2c-59e0-4aee-930a-560d811c393c","Type":"ContainerStarted","Data":"96e670a6d8a5468cc9263ab2138b184e818f2814622e22bc74cace50c168f939"} Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.390391 4832 generic.go:334] "Generic (PLEG): container finished" podID="2132fc2a-d11e-473a-b4ab-15c56ac5debf" containerID="2a484137a645562083e85d0f06ef2a80f24c332fe468822f79d6ad993e6e3d32" exitCode=0 Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.390480 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.390506 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64677dc65c-wh4zf" event={"ID":"2132fc2a-d11e-473a-b4ab-15c56ac5debf","Type":"ContainerDied","Data":"2a484137a645562083e85d0f06ef2a80f24c332fe468822f79d6ad993e6e3d32"} Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.405840 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.4058203479999998 podStartE2EDuration="3.405820348s" podCreationTimestamp="2025-10-02 18:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:43:28.397170231 +0000 UTC m=+1365.366613103" watchObservedRunningTime="2025-10-02 18:43:28.405820348 +0000 UTC m=+1365.375263220" Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.474334 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.491996 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.517484 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 02 18:43:28 crc kubenswrapper[4832]: E1002 18:43:28.518301 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerName="glance-httpd" Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.518321 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerName="glance-httpd" Oct 02 18:43:28 crc kubenswrapper[4832]: E1002 18:43:28.518337 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc" containerName="mariadb-database-create" Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.518345 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc" containerName="mariadb-database-create" Oct 02 18:43:28 crc kubenswrapper[4832]: E1002 
Oct 02 18:43:28 crc kubenswrapper[4832]: E1002 18:43:28.518382 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerName="glance-log"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.518389 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerName="glance-log"
Oct 02 18:43:28 crc kubenswrapper[4832]: E1002 18:43:28.518412 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43801b93-9634-4b11-995a-60ce9116aac4" containerName="mariadb-database-create"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.518418 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="43801b93-9634-4b11-995a-60ce9116aac4" containerName="mariadb-database-create"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.518614 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc" containerName="mariadb-database-create"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.518624 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerName="glance-log"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.518632 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" containerName="glance-httpd"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.518648 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="43801b93-9634-4b11-995a-60ce9116aac4" containerName="mariadb-database-create"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.520114 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.529119 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.529304 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.533251 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.601542 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3190ea6-2c6f-4fb9-a33a-768462224416-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.601599 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.601858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3190ea6-2c6f-4fb9-a33a-768462224416-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.601988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.602037 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.602161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.602240 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvwc\" (UniqueName: \"kubernetes.io/projected/b3190ea6-2c6f-4fb9-a33a-768462224416-kube-api-access-phvwc\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.602575 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.688461 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-64677dc65c-wh4zf"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.703851 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.703904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3190ea6-2c6f-4fb9-a33a-768462224416-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.703932 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.704001 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3190ea6-2c6f-4fb9-a33a-768462224416-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.704060 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.704082 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.704113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.704138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvwc\" (UniqueName: \"kubernetes.io/projected/b3190ea6-2c6f-4fb9-a33a-768462224416-kube-api-access-phvwc\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.704347 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.704890 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3190ea6-2c6f-4fb9-a33a-768462224416-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.706017 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3190ea6-2c6f-4fb9-a33a-768462224416-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.712977 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.713111 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.715160 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.721820 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvwc\" (UniqueName: \"kubernetes.io/projected/b3190ea6-2c6f-4fb9-a33a-768462224416-kube-api-access-phvwc\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.739054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3190ea6-2c6f-4fb9-a33a-768462224416-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.764527 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3190ea6-2c6f-4fb9-a33a-768462224416\") " pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.805551 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data-custom\") pod \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") "
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.805696 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-combined-ca-bundle\") pod \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") "
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.805773 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq76d\" (UniqueName: \"kubernetes.io/projected/2132fc2a-d11e-473a-b4ab-15c56ac5debf-kube-api-access-hq76d\") pod \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") "
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.806063 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data\") pod \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\" (UID: \"2132fc2a-d11e-473a-b4ab-15c56ac5debf\") "
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.812380 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2132fc2a-d11e-473a-b4ab-15c56ac5debf-kube-api-access-hq76d" (OuterVolumeSpecName: "kube-api-access-hq76d") pod "2132fc2a-d11e-473a-b4ab-15c56ac5debf" (UID: "2132fc2a-d11e-473a-b4ab-15c56ac5debf"). InnerVolumeSpecName "kube-api-access-hq76d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.813397 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2132fc2a-d11e-473a-b4ab-15c56ac5debf" (UID: "2132fc2a-d11e-473a-b4ab-15c56ac5debf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.838387 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2132fc2a-d11e-473a-b4ab-15c56ac5debf" (UID: "2132fc2a-d11e-473a-b4ab-15c56ac5debf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.862870 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.886217 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data" (OuterVolumeSpecName: "config-data") pod "2132fc2a-d11e-473a-b4ab-15c56ac5debf" (UID: "2132fc2a-d11e-473a-b4ab-15c56ac5debf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.909645 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data\") on node \"crc\" DevicePath \"\""
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.909675 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.909685 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2132fc2a-d11e-473a-b4ab-15c56ac5debf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 18:43:28 crc kubenswrapper[4832]: I1002 18:43:28.909694 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq76d\" (UniqueName: \"kubernetes.io/projected/2132fc2a-d11e-473a-b4ab-15c56ac5debf-kube-api-access-hq76d\") on node \"crc\" DevicePath \"\""
Oct 02 18:43:29 crc kubenswrapper[4832]: I1002 18:43:29.005400 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Oct 02 18:43:29 crc kubenswrapper[4832]: I1002 18:43:29.256245 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb31897d-9d37-446e-9cde-08d0e12fc428" path="/var/lib/kubelet/pods/eb31897d-9d37-446e-9cde-08d0e12fc428/volumes"
Oct 02 18:43:29 crc kubenswrapper[4832]: I1002 18:43:29.410767 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-64677dc65c-wh4zf" event={"ID":"2132fc2a-d11e-473a-b4ab-15c56ac5debf","Type":"ContainerDied","Data":"934a3fb9fc592f1f05a1f0787f6260284c7be675f431e558e89bef8055ddab0b"}
Oct 02 18:43:29 crc kubenswrapper[4832]: I1002 18:43:29.411126 4832 scope.go:117] "RemoveContainer" containerID="2a484137a645562083e85d0f06ef2a80f24c332fe468822f79d6ad993e6e3d32"
Oct 02 18:43:29 crc kubenswrapper[4832]: I1002 18:43:29.410820 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-64677dc65c-wh4zf"
Oct 02 18:43:29 crc kubenswrapper[4832]: I1002 18:43:29.458050 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-64677dc65c-wh4zf"]
Oct 02 18:43:29 crc kubenswrapper[4832]: I1002 18:43:29.471360 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-64677dc65c-wh4zf"]
Oct 02 18:43:29 crc kubenswrapper[4832]: I1002 18:43:29.560777 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 02 18:43:30 crc kubenswrapper[4832]: I1002 18:43:30.429549 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3190ea6-2c6f-4fb9-a33a-768462224416","Type":"ContainerStarted","Data":"48d8da7e4ab2960b6cd87b01813ea5a35e928f28184867529028985c06469697"}
Oct 02 18:43:30 crc kubenswrapper[4832]: I1002 18:43:30.429916 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3190ea6-2c6f-4fb9-a33a-768462224416","Type":"ContainerStarted","Data":"dc9ac150d909040810357be4574b9af4a2de0111e58fd1038f3c03c312b45cfd"}
Oct 02 18:43:31 crc kubenswrapper[4832]: I1002 18:43:31.237652 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2132fc2a-d11e-473a-b4ab-15c56ac5debf" path="/var/lib/kubelet/pods/2132fc2a-d11e-473a-b4ab-15c56ac5debf/volumes"
Oct 02 18:43:31 crc kubenswrapper[4832]: I1002 18:43:31.444420 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3190ea6-2c6f-4fb9-a33a-768462224416","Type":"ContainerStarted","Data":"1d8d3cc647a461abab49e301ab4f7af34313052410af26df8fe9891cdf163e03"}
Oct 02 18:43:31 crc kubenswrapper[4832]: I1002 18:43:31.480119 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.480098218 podStartE2EDuration="3.480098218s" podCreationTimestamp="2025-10-02 18:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:43:31.471938317 +0000 UTC m=+1368.441381189" watchObservedRunningTime="2025-10-02 18:43:31.480098218 +0000 UTC m=+1368.449541090"
Oct 02 18:43:34 crc kubenswrapper[4832]: E1002 18:43:34.883309 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f776d71_e1a8_4fbb_b18b_eec4ad57f95d.slice/crio-d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd.scope\": RecentStats: unable to find data in memory cache]"
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.470334 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-combined-ca-bundle\") pod \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.470418 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-config-data\") pod \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.470477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-scripts\") pod \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.470509 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-sg-core-conf-yaml\") pod \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.470571 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-log-httpd\") pod \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.470669 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk276\" (UniqueName: \"kubernetes.io/projected/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-kube-api-access-mk276\") pod \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.470736 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-run-httpd\") pod \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\" (UID: \"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d\") " Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.471901 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" (UID: "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.473462 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" (UID: "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.477182 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-scripts" (OuterVolumeSpecName: "scripts") pod "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" (UID: "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.477535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-kube-api-access-mk276" (OuterVolumeSpecName: "kube-api-access-mk276") pod "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" (UID: "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d"). InnerVolumeSpecName "kube-api-access-mk276". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.495759 4832 generic.go:334] "Generic (PLEG): container finished" podID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerID="d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd" exitCode=137 Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.495811 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerDied","Data":"d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd"} Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.495842 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f776d71-e1a8-4fbb-b18b-eec4ad57f95d","Type":"ContainerDied","Data":"53622898bb4a458a26033464c7a6f9e13f33e2fb60820dd6d4e15ad90133abdf"} Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.495862 4832 scope.go:117] "RemoveContainer" containerID="d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.496040 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.511011 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" (UID: "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.569475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" (UID: "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.573507 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.573537 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.573548 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.573559 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk276\" (UniqueName: \"kubernetes.io/projected/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-kube-api-access-mk276\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.573567 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.573575 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.621972 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-config-data" (OuterVolumeSpecName: "config-data") pod "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" (UID: "2f776d71-e1a8-4fbb-b18b-eec4ad57f95d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.639041 4832 scope.go:117] "RemoveContainer" containerID="8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.669333 4832 scope.go:117] "RemoveContainer" containerID="f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.675448 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.686957 4832 scope.go:117] "RemoveContainer" containerID="a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.710566 4832 scope.go:117] "RemoveContainer" containerID="d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd" Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 18:43:35.710952 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd\": container with ID starting with d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd not found: ID does not exist" containerID="d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.710987 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd"} err="failed to get container status \"d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd\": rpc error: code = NotFound desc = could not find container \"d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd\": container with ID starting with d3fc5e95536063dc7410ec49164d9c0a3081797f156576c865d6913985a213fd not found: ID does not exist" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.711012 4832 scope.go:117] "RemoveContainer" containerID="8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6" Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 18:43:35.711302 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6\": container with ID starting with 8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6 not found: ID does not exist" containerID="8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.711342 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6"} err="failed to get container status \"8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6\": rpc error: code = NotFound desc = could not find container \"8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6\": container with ID starting with 8a1721dc09acb6672a1d5fde834d67afaf385ae6fb3ef8d0b1da3a6ddf5ffad6 not found: ID does not exist" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.711368 4832 scope.go:117] "RemoveContainer" containerID="f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e" Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 
18:43:35.711780 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e\": container with ID starting with f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e not found: ID does not exist" containerID="f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.711803 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e"} err="failed to get container status \"f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e\": rpc error: code = NotFound desc = could not find container \"f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e\": container with ID starting with f55548eea6bb4a546c365d57e971114b76f0d5496a87dece782ab9620860d95e not found: ID does not exist" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.711818 4832 scope.go:117] "RemoveContainer" containerID="a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc" Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 18:43:35.712134 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc\": container with ID starting with a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc not found: ID does not exist" containerID="a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.712194 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc"} err="failed to get container status \"a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc\": rpc error: code = NotFound desc = could not find container \"a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc\": container with ID starting with a62e6f0326571bd65e097950ceb1aa99028ab8581aa995b72a2fdcd24fe13bdc not found: ID does not exist" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.751207 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.751310 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.791228 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.814670 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.907938 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.917463 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.942291 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 18:43:35.943477 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="sg-core" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.943498 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="sg-core" Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 18:43:35.943538 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="ceilometer-central-agent" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.943548 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="ceilometer-central-agent" Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 18:43:35.943581 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="proxy-httpd" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.943588 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="proxy-httpd" Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 18:43:35.943611 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="ceilometer-notification-agent" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.943617 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="ceilometer-notification-agent" Oct 02 18:43:35 crc kubenswrapper[4832]: E1002 18:43:35.943637 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2132fc2a-d11e-473a-b4ab-15c56ac5debf" containerName="heat-engine" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.943643 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2132fc2a-d11e-473a-b4ab-15c56ac5debf" containerName="heat-engine" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.944096 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="ceilometer-central-agent" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.944120 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="proxy-httpd" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.944135 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="sg-core" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.944152 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2132fc2a-d11e-473a-b4ab-15c56ac5debf" containerName="heat-engine" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.944180 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" containerName="ceilometer-notification-agent" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.955359 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.963171 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.963448 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.982970 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxrc\" (UniqueName: \"kubernetes.io/projected/eb8e83be-8976-410c-908a-acdf8f18c10f-kube-api-access-kdxrc\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.983310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-log-httpd\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.983407 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.983549 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.984060 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-config-data\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.984156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-run-httpd\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.984620 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-scripts\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:35 crc kubenswrapper[4832]: I1002 18:43:35.995987 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.086880 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-scripts\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.086967 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxrc\" (UniqueName: \"kubernetes.io/projected/eb8e83be-8976-410c-908a-acdf8f18c10f-kube-api-access-kdxrc\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.087000 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-log-httpd\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.087027 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.087069 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.087120 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-config-data\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.087151 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-run-httpd\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.087516 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-run-httpd\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.088103 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-log-httpd\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.092395 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-scripts\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.092572 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.093093 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-config-data\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.093552 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.112344 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxrc\" (UniqueName: \"kubernetes.io/projected/eb8e83be-8976-410c-908a-acdf8f18c10f-kube-api-access-kdxrc\") pod \"ceilometer-0\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.277793 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.427385 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.499186 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-scripts\") pod \"ab3b380b-509b-4012-bbec-74d1dd95c048\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.499283 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data\") pod \"ab3b380b-509b-4012-bbec-74d1dd95c048\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.499326 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3b380b-509b-4012-bbec-74d1dd95c048-logs\") pod \"ab3b380b-509b-4012-bbec-74d1dd95c048\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.499397 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3b380b-509b-4012-bbec-74d1dd95c048-etc-machine-id\") pod \"ab3b380b-509b-4012-bbec-74d1dd95c048\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.499451 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data-custom\") pod \"ab3b380b-509b-4012-bbec-74d1dd95c048\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.499530 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-combined-ca-bundle\") pod \"ab3b380b-509b-4012-bbec-74d1dd95c048\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.499550 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95vfm\" (UniqueName: 
\"kubernetes.io/projected/ab3b380b-509b-4012-bbec-74d1dd95c048-kube-api-access-95vfm\") pod \"ab3b380b-509b-4012-bbec-74d1dd95c048\" (UID: \"ab3b380b-509b-4012-bbec-74d1dd95c048\") " Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.500870 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab3b380b-509b-4012-bbec-74d1dd95c048-logs" (OuterVolumeSpecName: "logs") pod "ab3b380b-509b-4012-bbec-74d1dd95c048" (UID: "ab3b380b-509b-4012-bbec-74d1dd95c048"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.503997 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab3b380b-509b-4012-bbec-74d1dd95c048-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ab3b380b-509b-4012-bbec-74d1dd95c048" (UID: "ab3b380b-509b-4012-bbec-74d1dd95c048"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.506505 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-scripts" (OuterVolumeSpecName: "scripts") pod "ab3b380b-509b-4012-bbec-74d1dd95c048" (UID: "ab3b380b-509b-4012-bbec-74d1dd95c048"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.515495 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab3b380b-509b-4012-bbec-74d1dd95c048-kube-api-access-95vfm" (OuterVolumeSpecName: "kube-api-access-95vfm") pod "ab3b380b-509b-4012-bbec-74d1dd95c048" (UID: "ab3b380b-509b-4012-bbec-74d1dd95c048"). InnerVolumeSpecName "kube-api-access-95vfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.515589 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ab3b380b-509b-4012-bbec-74d1dd95c048" (UID: "ab3b380b-509b-4012-bbec-74d1dd95c048"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.520195 4832 generic.go:334] "Generic (PLEG): container finished" podID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerID="b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c" exitCode=137 Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.520362 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3b380b-509b-4012-bbec-74d1dd95c048","Type":"ContainerDied","Data":"b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c"} Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.520400 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ab3b380b-509b-4012-bbec-74d1dd95c048","Type":"ContainerDied","Data":"880e1a04746c17a5b3120b4378835fa7c8343741357abac2f616532d39354ed5"} Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.520424 4832 scope.go:117] "RemoveContainer" containerID="b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.520591 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.522811 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.522847 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.555588 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab3b380b-509b-4012-bbec-74d1dd95c048" (UID: "ab3b380b-509b-4012-bbec-74d1dd95c048"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.558851 4832 scope.go:117] "RemoveContainer" containerID="e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.566214 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data" (OuterVolumeSpecName: "config-data") pod "ab3b380b-509b-4012-bbec-74d1dd95c048" (UID: "ab3b380b-509b-4012-bbec-74d1dd95c048"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.596114 4832 scope.go:117] "RemoveContainer" containerID="b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c" Oct 02 18:43:36 crc kubenswrapper[4832]: E1002 18:43:36.597034 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c\": container with ID starting with b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c not found: ID does not exist" containerID="b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.597091 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c"} err="failed to get container status \"b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c\": rpc error: code = NotFound desc = could not find container \"b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c\": container with ID starting with b5409c85639be16258a48f07ab62fb22f93ce16923f6a42978876a25c374064c not found: ID does not exist" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.597128 4832 scope.go:117] "RemoveContainer" containerID="e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32" Oct 02 18:43:36 crc kubenswrapper[4832]: E1002 18:43:36.597411 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32\": container with ID starting with e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32 not found: ID does not exist" containerID="e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.597442 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32"} err="failed to get container status \"e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32\": rpc error: code = NotFound desc = could not find container \"e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32\": container with ID starting with e372e92c81b2d7de7473348ce116710a4a33dfd86e8cba86c7990791f209fc32 not found: ID does not exist" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.601668 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.601709 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95vfm\" (UniqueName: \"kubernetes.io/projected/ab3b380b-509b-4012-bbec-74d1dd95c048-kube-api-access-95vfm\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.601721 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.601729 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.601738 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab3b380b-509b-4012-bbec-74d1dd95c048-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.601746 4832 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab3b380b-509b-4012-bbec-74d1dd95c048-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.601754 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab3b380b-509b-4012-bbec-74d1dd95c048-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.805036 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.897798 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.908571 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.923029 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:43:36 crc kubenswrapper[4832]: E1002 18:43:36.923587 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerName="cinder-api-log" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.923607 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerName="cinder-api-log" Oct 02 18:43:36 crc kubenswrapper[4832]: E1002 18:43:36.923631 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerName="cinder-api" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 
18:43:36.923638 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerName="cinder-api" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.923847 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerName="cinder-api" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.923875 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" containerName="cinder-api-log" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.925118 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.930323 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.930482 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.930634 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 02 18:43:36 crc kubenswrapper[4832]: I1002 18:43:36.948569 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.008313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-config-data\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.008603 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.008869 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-scripts\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.008999 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/528718cd-4242-48d1-be69-6637022d4c84-etc-machine-id\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.009097 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/528718cd-4242-48d1-be69-6637022d4c84-logs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.009213 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4x9\" (UniqueName: \"kubernetes.io/projected/528718cd-4242-48d1-be69-6637022d4c84-kube-api-access-7x4x9\") pod \"cinder-api-0\" (UID: 
\"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.009352 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-public-tls-certs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.009488 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.010229 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-config-data-custom\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.047341 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.111773 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-scripts\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.111844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/528718cd-4242-48d1-be69-6637022d4c84-etc-machine-id\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.111873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/528718cd-4242-48d1-be69-6637022d4c84-logs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.111898 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4x9\" (UniqueName: \"kubernetes.io/projected/528718cd-4242-48d1-be69-6637022d4c84-kube-api-access-7x4x9\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.111923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-public-tls-certs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.111984 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.112000 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-config-data-custom\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.112031 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-config-data\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.112064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.112529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/528718cd-4242-48d1-be69-6637022d4c84-etc-machine-id\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.112918 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/528718cd-4242-48d1-be69-6637022d4c84-logs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.118171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.118352 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.118513 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-scripts\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.118717 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-config-data\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.119587 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-public-tls-certs\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.122724 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/528718cd-4242-48d1-be69-6637022d4c84-config-data-custom\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.136992 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4x9\" (UniqueName: \"kubernetes.io/projected/528718cd-4242-48d1-be69-6637022d4c84-kube-api-access-7x4x9\") pod \"cinder-api-0\" (UID: \"528718cd-4242-48d1-be69-6637022d4c84\") " pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.234050 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f776d71-e1a8-4fbb-b18b-eec4ad57f95d" path="/var/lib/kubelet/pods/2f776d71-e1a8-4fbb-b18b-eec4ad57f95d/volumes" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.234814 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab3b380b-509b-4012-bbec-74d1dd95c048" path="/var/lib/kubelet/pods/ab3b380b-509b-4012-bbec-74d1dd95c048/volumes" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.245446 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.540176 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerStarted","Data":"7270f809757d8359d6824420b0099b2e60ea248acd031809084e719754acb892"} Oct 02 18:43:37 crc kubenswrapper[4832]: W1002 18:43:37.766147 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod528718cd_4242_48d1_be69_6637022d4c84.slice/crio-31751f8bf78009b2142e9d5d6985b46e6ed70b04e7d0811a180a0b3bd2322afa WatchSource:0}: Error finding container 31751f8bf78009b2142e9d5d6985b46e6ed70b04e7d0811a180a0b3bd2322afa: Status 404 returned error can't find the container with id 31751f8bf78009b2142e9d5d6985b46e6ed70b04e7d0811a180a0b3bd2322afa Oct 02 18:43:37 crc kubenswrapper[4832]: I1002 18:43:37.772540 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.554829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"528718cd-4242-48d1-be69-6637022d4c84","Type":"ContainerStarted","Data":"5286d5e97721bc480ee6accfea30a3ee6430cdd754077e280c0a2e5f19003ff8"} Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.555337 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"528718cd-4242-48d1-be69-6637022d4c84","Type":"ContainerStarted","Data":"31751f8bf78009b2142e9d5d6985b46e6ed70b04e7d0811a180a0b3bd2322afa"} Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.564251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerStarted","Data":"718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1"} Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.820356 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.820476 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.821326 4832 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.868957 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.869306 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:38 crc kubenswrapper[4832]: I1002 18:43:38.965577 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:39 crc kubenswrapper[4832]: I1002 18:43:39.012672 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:39 crc kubenswrapper[4832]: I1002 18:43:39.591332 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerStarted","Data":"91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7"} Oct 02 18:43:39 crc kubenswrapper[4832]: I1002 18:43:39.591628 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerStarted","Data":"c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9"} Oct 02 18:43:39 crc kubenswrapper[4832]: I1002 18:43:39.603293 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"528718cd-4242-48d1-be69-6637022d4c84","Type":"ContainerStarted","Data":"fd9b23db0911a61f1ac11cb0b5f81ab515a9ece575d2b31c67782d3380716a50"} Oct 02 18:43:39 crc kubenswrapper[4832]: I1002 18:43:39.604218 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:39 crc kubenswrapper[4832]: I1002 18:43:39.604239 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:39 crc kubenswrapper[4832]: I1002 18:43:39.604248 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 02 18:43:39 crc kubenswrapper[4832]: I1002 18:43:39.641758 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.6417351 podStartE2EDuration="3.6417351s" podCreationTimestamp="2025-10-02 18:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:43:39.62579047 +0000 UTC m=+1376.595233342" watchObservedRunningTime="2025-10-02 18:43:39.6417351 +0000 UTC m=+1376.611177972" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.567546 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2c18-account-create-wmc9q"] Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.569375 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2c18-account-create-wmc9q" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.573462 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.584827 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2c18-account-create-wmc9q"] Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.695666 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkkp5\" (UniqueName: \"kubernetes.io/projected/36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1-kube-api-access-tkkp5\") pod \"nova-api-2c18-account-create-wmc9q\" (UID: \"36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1\") " pod="openstack/nova-api-2c18-account-create-wmc9q" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.764730 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3b08-account-create-klnrc"] Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.766219 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3b08-account-create-klnrc" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.768508 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.778134 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3b08-account-create-klnrc"] Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.797808 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkkp5\" (UniqueName: \"kubernetes.io/projected/36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1-kube-api-access-tkkp5\") pod \"nova-api-2c18-account-create-wmc9q\" (UID: \"36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1\") " pod="openstack/nova-api-2c18-account-create-wmc9q" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.820893 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkkp5\" (UniqueName: \"kubernetes.io/projected/36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1-kube-api-access-tkkp5\") pod \"nova-api-2c18-account-create-wmc9q\" (UID: \"36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1\") " pod="openstack/nova-api-2c18-account-create-wmc9q" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.900382 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8j26\" (UniqueName: \"kubernetes.io/projected/55b79da8-3130-4a24-b34e-5179d295a543-kube-api-access-t8j26\") pod \"nova-cell0-3b08-account-create-klnrc\" (UID: \"55b79da8-3130-4a24-b34e-5179d295a543\") " pod="openstack/nova-cell0-3b08-account-create-klnrc" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.904698 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2c18-account-create-wmc9q" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.973299 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5495-account-create-cfhsk"] Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.974818 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5495-account-create-cfhsk" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.977258 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 02 18:43:40 crc kubenswrapper[4832]: I1002 18:43:40.982627 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5495-account-create-cfhsk"] Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.003709 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8j26\" (UniqueName: \"kubernetes.io/projected/55b79da8-3130-4a24-b34e-5179d295a543-kube-api-access-t8j26\") pod \"nova-cell0-3b08-account-create-klnrc\" (UID: \"55b79da8-3130-4a24-b34e-5179d295a543\") " pod="openstack/nova-cell0-3b08-account-create-klnrc" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.031959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8j26\" (UniqueName: \"kubernetes.io/projected/55b79da8-3130-4a24-b34e-5179d295a543-kube-api-access-t8j26\") pod \"nova-cell0-3b08-account-create-klnrc\" (UID: \"55b79da8-3130-4a24-b34e-5179d295a543\") " pod="openstack/nova-cell0-3b08-account-create-klnrc" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.088810 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3b08-account-create-klnrc" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.110940 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j89hr\" (UniqueName: \"kubernetes.io/projected/e19f6233-ad7b-418c-a4a0-b2ffaafbbb52-kube-api-access-j89hr\") pod \"nova-cell1-5495-account-create-cfhsk\" (UID: \"e19f6233-ad7b-418c-a4a0-b2ffaafbbb52\") " pod="openstack/nova-cell1-5495-account-create-cfhsk" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.214628 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j89hr\" (UniqueName: \"kubernetes.io/projected/e19f6233-ad7b-418c-a4a0-b2ffaafbbb52-kube-api-access-j89hr\") pod \"nova-cell1-5495-account-create-cfhsk\" (UID: \"e19f6233-ad7b-418c-a4a0-b2ffaafbbb52\") " pod="openstack/nova-cell1-5495-account-create-cfhsk" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.250109 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j89hr\" (UniqueName: \"kubernetes.io/projected/e19f6233-ad7b-418c-a4a0-b2ffaafbbb52-kube-api-access-j89hr\") pod \"nova-cell1-5495-account-create-cfhsk\" (UID: \"e19f6233-ad7b-418c-a4a0-b2ffaafbbb52\") " pod="openstack/nova-cell1-5495-account-create-cfhsk" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.526779 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5495-account-create-cfhsk" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.576715 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2c18-account-create-wmc9q"] Oct 02 18:43:41 crc kubenswrapper[4832]: W1002 18:43:41.582014 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36c5c7b1_24e3_4c46_8b92_308ce6e4fbc1.slice/crio-bb555e018b034926b63cc731710e18d78b3f13fae33f4da75777e2873362d04e WatchSource:0}: Error finding container bb555e018b034926b63cc731710e18d78b3f13fae33f4da75777e2873362d04e: Status 404 returned error can't find the container with id bb555e018b034926b63cc731710e18d78b3f13fae33f4da75777e2873362d04e Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.632136 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2c18-account-create-wmc9q" event={"ID":"36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1","Type":"ContainerStarted","Data":"bb555e018b034926b63cc731710e18d78b3f13fae33f4da75777e2873362d04e"} Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.632199 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.632209 4832 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 02 18:43:41 crc kubenswrapper[4832]: I1002 18:43:41.898497 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3b08-account-create-klnrc"] Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.073064 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.118095 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5495-account-create-cfhsk"] Oct 02 18:43:42 crc kubenswrapper[4832]: W1002 18:43:42.118344 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19f6233_ad7b_418c_a4a0_b2ffaafbbb52.slice/crio-609af1518d853ed6ee5c69a4e15a69b3a5cfb030e35c057bc9e6958dc707d3a6 WatchSource:0}: Error finding container 609af1518d853ed6ee5c69a4e15a69b3a5cfb030e35c057bc9e6958dc707d3a6: Status 404 returned error can't find the container with id 609af1518d853ed6ee5c69a4e15a69b3a5cfb030e35c057bc9e6958dc707d3a6 Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.144982 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.644896 4832 generic.go:334] "Generic (PLEG): container finished" podID="e19f6233-ad7b-418c-a4a0-b2ffaafbbb52" containerID="fb886f1bc09a12f7c592ad96f6440e96e7c85227bc25af979d9de0be19dc2609" exitCode=0 Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.645035 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5495-account-create-cfhsk" event={"ID":"e19f6233-ad7b-418c-a4a0-b2ffaafbbb52","Type":"ContainerDied","Data":"fb886f1bc09a12f7c592ad96f6440e96e7c85227bc25af979d9de0be19dc2609"} Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.645212 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5495-account-create-cfhsk" event={"ID":"e19f6233-ad7b-418c-a4a0-b2ffaafbbb52","Type":"ContainerStarted","Data":"609af1518d853ed6ee5c69a4e15a69b3a5cfb030e35c057bc9e6958dc707d3a6"} Oct 02 
18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.650567 4832 generic.go:334] "Generic (PLEG): container finished" podID="55b79da8-3130-4a24-b34e-5179d295a543" containerID="f8dcc0e4f8736fa2c15c4f860f5f0bac26a9032f1666e6ba33de830709a7802c" exitCode=0 Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.650672 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b08-account-create-klnrc" event={"ID":"55b79da8-3130-4a24-b34e-5179d295a543","Type":"ContainerDied","Data":"f8dcc0e4f8736fa2c15c4f860f5f0bac26a9032f1666e6ba33de830709a7802c"} Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.650695 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b08-account-create-klnrc" event={"ID":"55b79da8-3130-4a24-b34e-5179d295a543","Type":"ContainerStarted","Data":"ca7ba884ac3a1c0ea6b10f0851a6f5c16f9e6edff8af1eab3d14d54162963669"} Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.654772 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerStarted","Data":"13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8"} Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.655142 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="sg-core" containerID="cri-o://91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7" gracePeriod=30 Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.655316 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="proxy-httpd" containerID="cri-o://13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8" gracePeriod=30 Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.655337 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="ceilometer-notification-agent" containerID="cri-o://c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9" gracePeriod=30 Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.655158 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.655542 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="ceilometer-central-agent" containerID="cri-o://718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1" gracePeriod=30 Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.659369 4832 generic.go:334] "Generic (PLEG): container finished" podID="36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1" containerID="c174d68e70894850142250ba6fd6ecfd51a8f72b8fcee769e8f043eab010dabf" exitCode=0 Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.659543 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2c18-account-create-wmc9q" event={"ID":"36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1","Type":"ContainerDied","Data":"c174d68e70894850142250ba6fd6ecfd51a8f72b8fcee769e8f043eab010dabf"} Oct 02 18:43:42 crc kubenswrapper[4832]: I1002 18:43:42.699054 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8839927359999997 
podStartE2EDuration="7.699029558s" podCreationTimestamp="2025-10-02 18:43:35 +0000 UTC" firstStartedPulling="2025-10-02 18:43:36.812886151 +0000 UTC m=+1373.782329023" lastFinishedPulling="2025-10-02 18:43:41.627922973 +0000 UTC m=+1378.597365845" observedRunningTime="2025-10-02 18:43:42.689764823 +0000 UTC m=+1379.659207715" watchObservedRunningTime="2025-10-02 18:43:42.699029558 +0000 UTC m=+1379.668472440" Oct 02 18:43:43 crc kubenswrapper[4832]: I1002 18:43:43.675009 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerID="13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8" exitCode=0 Oct 02 18:43:43 crc kubenswrapper[4832]: I1002 18:43:43.675660 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerID="91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7" exitCode=2 Oct 02 18:43:43 crc kubenswrapper[4832]: I1002 18:43:43.675675 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerID="c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9" exitCode=0 Oct 02 18:43:43 crc kubenswrapper[4832]: I1002 18:43:43.675158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerDied","Data":"13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8"} Oct 02 18:43:43 crc kubenswrapper[4832]: I1002 18:43:43.675912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerDied","Data":"91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7"} Oct 02 18:43:43 crc kubenswrapper[4832]: I1002 18:43:43.675986 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerDied","Data":"c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9"} Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.449762 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2c18-account-create-wmc9q" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.518554 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkkp5\" (UniqueName: \"kubernetes.io/projected/36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1-kube-api-access-tkkp5\") pod \"36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1\" (UID: \"36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1\") " Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.526423 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1-kube-api-access-tkkp5" (OuterVolumeSpecName: "kube-api-access-tkkp5") pod "36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1" (UID: "36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1"). InnerVolumeSpecName "kube-api-access-tkkp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.621824 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkkp5\" (UniqueName: \"kubernetes.io/projected/36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1-kube-api-access-tkkp5\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.669724 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3b08-account-create-klnrc" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.681038 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5495-account-create-cfhsk" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.687622 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2c18-account-create-wmc9q" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.687614 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2c18-account-create-wmc9q" event={"ID":"36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1","Type":"ContainerDied","Data":"bb555e018b034926b63cc731710e18d78b3f13fae33f4da75777e2873362d04e"} Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.687766 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb555e018b034926b63cc731710e18d78b3f13fae33f4da75777e2873362d04e" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.689889 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5495-account-create-cfhsk" event={"ID":"e19f6233-ad7b-418c-a4a0-b2ffaafbbb52","Type":"ContainerDied","Data":"609af1518d853ed6ee5c69a4e15a69b3a5cfb030e35c057bc9e6958dc707d3a6"} Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.689908 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5495-account-create-cfhsk" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.689924 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="609af1518d853ed6ee5c69a4e15a69b3a5cfb030e35c057bc9e6958dc707d3a6" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.693917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3b08-account-create-klnrc" event={"ID":"55b79da8-3130-4a24-b34e-5179d295a543","Type":"ContainerDied","Data":"ca7ba884ac3a1c0ea6b10f0851a6f5c16f9e6edff8af1eab3d14d54162963669"} Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.693961 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7ba884ac3a1c0ea6b10f0851a6f5c16f9e6edff8af1eab3d14d54162963669" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.694039 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3b08-account-create-klnrc" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.724094 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j89hr\" (UniqueName: \"kubernetes.io/projected/e19f6233-ad7b-418c-a4a0-b2ffaafbbb52-kube-api-access-j89hr\") pod \"e19f6233-ad7b-418c-a4a0-b2ffaafbbb52\" (UID: \"e19f6233-ad7b-418c-a4a0-b2ffaafbbb52\") " Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.724171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8j26\" (UniqueName: \"kubernetes.io/projected/55b79da8-3130-4a24-b34e-5179d295a543-kube-api-access-t8j26\") pod \"55b79da8-3130-4a24-b34e-5179d295a543\" (UID: \"55b79da8-3130-4a24-b34e-5179d295a543\") " Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.732696 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e19f6233-ad7b-418c-a4a0-b2ffaafbbb52-kube-api-access-j89hr" (OuterVolumeSpecName: "kube-api-access-j89hr") pod "e19f6233-ad7b-418c-a4a0-b2ffaafbbb52" (UID: "e19f6233-ad7b-418c-a4a0-b2ffaafbbb52"). InnerVolumeSpecName "kube-api-access-j89hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.734159 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b79da8-3130-4a24-b34e-5179d295a543-kube-api-access-t8j26" (OuterVolumeSpecName: "kube-api-access-t8j26") pod "55b79da8-3130-4a24-b34e-5179d295a543" (UID: "55b79da8-3130-4a24-b34e-5179d295a543"). InnerVolumeSpecName "kube-api-access-t8j26". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.826479 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j89hr\" (UniqueName: \"kubernetes.io/projected/e19f6233-ad7b-418c-a4a0-b2ffaafbbb52-kube-api-access-j89hr\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:44 crc kubenswrapper[4832]: I1002 18:43:44.826516 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8j26\" (UniqueName: \"kubernetes.io/projected/55b79da8-3130-4a24-b34e-5179d295a543-kube-api-access-t8j26\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:45 crc kubenswrapper[4832]: E1002 18:43:45.195210 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode19f6233_ad7b_418c_a4a0_b2ffaafbbb52.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55b79da8_3130_4a24_b34e_5179d295a543.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55b79da8_3130_4a24_b34e_5179d295a543.slice/crio-ca7ba884ac3a1c0ea6b10f0851a6f5c16f9e6edff8af1eab3d14d54162963669\": RecentStats: unable to find data in memory cache]" Oct 02 18:43:45 crc kubenswrapper[4832]: I1002 18:43:45.272076 4832 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod86aa56ca-c6e9-4382-a9aa-fea6afc94ade"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod86aa56ca-c6e9-4382-a9aa-fea6afc94ade] : Timed out while waiting for systemd to remove kubepods-besteffort-pod86aa56ca_c6e9_4382_a9aa_fea6afc94ade.slice" Oct 02 18:43:45 crc kubenswrapper[4832]: E1002 18:43:45.272150 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod86aa56ca-c6e9-4382-a9aa-fea6afc94ade] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod86aa56ca-c6e9-4382-a9aa-fea6afc94ade] : Timed out while waiting for systemd to remove kubepods-besteffort-pod86aa56ca_c6e9_4382_a9aa_fea6afc94ade.slice" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" Oct 02 18:43:45 crc kubenswrapper[4832]: I1002 18:43:45.705526 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-pk84v" Oct 02 18:43:45 crc kubenswrapper[4832]: I1002 18:43:45.750307 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pk84v"] Oct 02 18:43:45 crc kubenswrapper[4832]: I1002 18:43:45.759867 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-pk84v"] Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.007502 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lmj2"] Oct 02 18:43:46 crc kubenswrapper[4832]: E1002 18:43:46.008142 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e19f6233-ad7b-418c-a4a0-b2ffaafbbb52" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.008166 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e19f6233-ad7b-418c-a4a0-b2ffaafbbb52" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: E1002 18:43:46.008200 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.008210 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: E1002 18:43:46.008223 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b79da8-3130-4a24-b34e-5179d295a543" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.008231 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b79da8-3130-4a24-b34e-5179d295a543" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.008490 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b79da8-3130-4a24-b34e-5179d295a543" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.008525 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e19f6233-ad7b-418c-a4a0-b2ffaafbbb52" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.008542 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1" containerName="mariadb-account-create" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.009554 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.012468 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.012715 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cwtfs" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.013789 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.021969 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lmj2"] Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.055601 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2hc\" (UniqueName: \"kubernetes.io/projected/e36d9377-cbc4-4760-ae43-3065dfe614fe-kube-api-access-5s2hc\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.055662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-scripts\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.055923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-config-data\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.056009 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.157580 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s2hc\" (UniqueName: \"kubernetes.io/projected/e36d9377-cbc4-4760-ae43-3065dfe614fe-kube-api-access-5s2hc\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.157655 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-scripts\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.157755 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-config-data\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: 
\"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.157802 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.163697 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-scripts\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.164441 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-config-data\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.180223 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.212925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s2hc\" (UniqueName: \"kubernetes.io/projected/e36d9377-cbc4-4760-ae43-3065dfe614fe-kube-api-access-5s2hc\") pod \"nova-cell0-conductor-db-sync-5lmj2\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.353363 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.501222 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.596762 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-scripts\") pod \"eb8e83be-8976-410c-908a-acdf8f18c10f\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.596871 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-combined-ca-bundle\") pod \"eb8e83be-8976-410c-908a-acdf8f18c10f\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.596998 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-config-data\") pod \"eb8e83be-8976-410c-908a-acdf8f18c10f\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.597530 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-sg-core-conf-yaml\") pod \"eb8e83be-8976-410c-908a-acdf8f18c10f\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.597602 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdxrc\" (UniqueName: \"kubernetes.io/projected/eb8e83be-8976-410c-908a-acdf8f18c10f-kube-api-access-kdxrc\") pod \"eb8e83be-8976-410c-908a-acdf8f18c10f\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.597680 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-log-httpd\") pod \"eb8e83be-8976-410c-908a-acdf8f18c10f\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.597701 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-run-httpd\") pod \"eb8e83be-8976-410c-908a-acdf8f18c10f\" (UID: \"eb8e83be-8976-410c-908a-acdf8f18c10f\") " Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.598780 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb8e83be-8976-410c-908a-acdf8f18c10f" (UID: "eb8e83be-8976-410c-908a-acdf8f18c10f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.604078 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb8e83be-8976-410c-908a-acdf8f18c10f" (UID: "eb8e83be-8976-410c-908a-acdf8f18c10f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.615572 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8e83be-8976-410c-908a-acdf8f18c10f-kube-api-access-kdxrc" (OuterVolumeSpecName: "kube-api-access-kdxrc") pod "eb8e83be-8976-410c-908a-acdf8f18c10f" (UID: "eb8e83be-8976-410c-908a-acdf8f18c10f"). InnerVolumeSpecName "kube-api-access-kdxrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.619408 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-scripts" (OuterVolumeSpecName: "scripts") pod "eb8e83be-8976-410c-908a-acdf8f18c10f" (UID: "eb8e83be-8976-410c-908a-acdf8f18c10f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.656979 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb8e83be-8976-410c-908a-acdf8f18c10f" (UID: "eb8e83be-8976-410c-908a-acdf8f18c10f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.702020 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.702049 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdxrc\" (UniqueName: \"kubernetes.io/projected/eb8e83be-8976-410c-908a-acdf8f18c10f-kube-api-access-kdxrc\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.702061 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.702072 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb8e83be-8976-410c-908a-acdf8f18c10f-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.702082 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.730678 4832 generic.go:334] "Generic (PLEG): container finished" podID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerID="718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1" exitCode=0 Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.730730 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerDied","Data":"718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1"} Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.730761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb8e83be-8976-410c-908a-acdf8f18c10f","Type":"ContainerDied","Data":"7270f809757d8359d6824420b0099b2e60ea248acd031809084e719754acb892"} Oct 02 
18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.730784 4832 scope.go:117] "RemoveContainer" containerID="13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.730974 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.757660 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb8e83be-8976-410c-908a-acdf8f18c10f" (UID: "eb8e83be-8976-410c-908a-acdf8f18c10f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.802225 4832 scope.go:117] "RemoveContainer" containerID="91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.803764 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.827787 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-config-data" (OuterVolumeSpecName: "config-data") pod "eb8e83be-8976-410c-908a-acdf8f18c10f" (UID: "eb8e83be-8976-410c-908a-acdf8f18c10f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.844303 4832 scope.go:117] "RemoveContainer" containerID="c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.850316 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lmj2"] Oct 02 18:43:46 crc kubenswrapper[4832]: W1002 18:43:46.853237 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode36d9377_cbc4_4760_ae43_3065dfe614fe.slice/crio-f3de13ca3f4f9bd4e1900055b01a33f94ec033e21a24ac037642d9badcef9a9d WatchSource:0}: Error finding container f3de13ca3f4f9bd4e1900055b01a33f94ec033e21a24ac037642d9badcef9a9d: Status 404 returned error can't find the container with id f3de13ca3f4f9bd4e1900055b01a33f94ec033e21a24ac037642d9badcef9a9d Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.867138 4832 scope.go:117] "RemoveContainer" containerID="718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.897709 4832 scope.go:117] "RemoveContainer" containerID="13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8" Oct 02 18:43:46 crc kubenswrapper[4832]: E1002 18:43:46.898246 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8\": container with ID starting with 13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8 not found: ID does not exist" containerID="13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.898685 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8"} err="failed to get container status \"13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8\": rpc error: code = NotFound desc = could not find container \"13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8\": container with ID starting with 13d674cb7e84d97e8df7e3e293c8e86a0847e73d739d94c08a8efc2b50e5d7a8 not found: ID does not exist" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.898712 4832 scope.go:117] "RemoveContainer" containerID="91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7" Oct 02 18:43:46 crc kubenswrapper[4832]: E1002 18:43:46.899242 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7\": container with ID starting with 91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7 not found: ID does not exist" containerID="91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.899289 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7"} err="failed to get container status \"91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7\": rpc error: code = NotFound desc = could not find container \"91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7\": container with ID starting with 91f7c42a4507de2a2b116a21a41bb691b3bd2cb38826a9fb75faadf5f48ac2e7 not found: ID does not exist" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.899478 4832 scope.go:117] "RemoveContainer" containerID="c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9" Oct 02 18:43:46 crc kubenswrapper[4832]: E1002 18:43:46.900136 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9\": container with ID starting with c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9 not found: ID does not exist" containerID="c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.900166 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9"} err="failed to get container status \"c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9\": rpc error: code = NotFound desc = could not find container \"c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9\": container with ID starting with c48f09443e03d635880441d1b56a47c6c4375e3352b91ed7f6a5c4d55b2977d9 not found: ID does not exist" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.900183 4832 scope.go:117] "RemoveContainer" containerID="718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1" Oct 02 18:43:46 crc kubenswrapper[4832]: E1002 18:43:46.900466 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1\": container with ID starting with 718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1 not found: ID does not exist" 
containerID="718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.900493 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1"} err="failed to get container status \"718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1\": rpc error: code = NotFound desc = could not find container \"718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1\": container with ID starting with 718a19f8eb83061abb577cf8a94e26cf4a265bf6fbb72e2c891304a5f1d519f1 not found: ID does not exist" Oct 02 18:43:46 crc kubenswrapper[4832]: I1002 18:43:46.905433 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb8e83be-8976-410c-908a-acdf8f18c10f-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.068088 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.080631 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.096228 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:47 crc kubenswrapper[4832]: E1002 18:43:47.096693 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="proxy-httpd" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.096711 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="proxy-httpd" Oct 02 18:43:47 crc kubenswrapper[4832]: E1002 18:43:47.096720 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="ceilometer-notification-agent" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.096726 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="ceilometer-notification-agent" Oct 02 18:43:47 crc kubenswrapper[4832]: E1002 18:43:47.096740 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="ceilometer-central-agent" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.096747 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="ceilometer-central-agent" Oct 02 18:43:47 crc kubenswrapper[4832]: E1002 18:43:47.096753 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="sg-core" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.096758 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="sg-core" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.096960 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="sg-core" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.096985 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="ceilometer-notification-agent" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.096996 4832 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="ceilometer-central-agent" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.097007 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" containerName="proxy-httpd" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.098912 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.101791 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.106938 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.109180 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72hdm\" (UniqueName: \"kubernetes.io/projected/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-kube-api-access-72hdm\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.109290 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-log-httpd\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.109311 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.109336 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-scripts\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.109360 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.109392 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-run-httpd\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.109437 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-config-data\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.115280 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 
18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.211377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-log-httpd\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.211419 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.211593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-scripts\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.211650 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.211771 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-log-httpd\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.211791 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-run-httpd\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.212143 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-run-httpd\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.212337 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-config-data\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.212567 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72hdm\" (UniqueName: \"kubernetes.io/projected/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-kube-api-access-72hdm\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.219125 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 
18:43:47.222641 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.223230 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-config-data\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.223593 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-scripts\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.231522 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72hdm\" (UniqueName: \"kubernetes.io/projected/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-kube-api-access-72hdm\") pod \"ceilometer-0\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.238664 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86aa56ca-c6e9-4382-a9aa-fea6afc94ade" path="/var/lib/kubelet/pods/86aa56ca-c6e9-4382-a9aa-fea6afc94ade/volumes" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.244652 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8e83be-8976-410c-908a-acdf8f18c10f" path="/var/lib/kubelet/pods/eb8e83be-8976-410c-908a-acdf8f18c10f/volumes" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.432000 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.758766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" event={"ID":"e36d9377-cbc4-4760-ae43-3065dfe614fe","Type":"ContainerStarted","Data":"f3de13ca3f4f9bd4e1900055b01a33f94ec033e21a24ac037642d9badcef9a9d"} Oct 02 18:43:47 crc kubenswrapper[4832]: I1002 18:43:47.912452 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:47 crc kubenswrapper[4832]: W1002 18:43:47.917254 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc497214_f31f_4ca9_9076_8ae4a83bb6f5.slice/crio-aa14e714b37e08b254f68c8c719336ee7e11d7c5ffb03aed9a84bf2d20db5d17 WatchSource:0}: Error finding container aa14e714b37e08b254f68c8c719336ee7e11d7c5ffb03aed9a84bf2d20db5d17: Status 404 returned error can't find the container with id aa14e714b37e08b254f68c8c719336ee7e11d7c5ffb03aed9a84bf2d20db5d17 Oct 02 18:43:48 crc kubenswrapper[4832]: I1002 18:43:48.233870 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:43:48 crc kubenswrapper[4832]: I1002 18:43:48.783786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerStarted","Data":"aa14e714b37e08b254f68c8c719336ee7e11d7c5ffb03aed9a84bf2d20db5d17"} Oct 02 18:43:49 crc kubenswrapper[4832]: I1002 18:43:49.396387 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 02 18:43:49 crc kubenswrapper[4832]: I1002 18:43:49.798613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerStarted","Data":"6c5bfe5439ba5db4b7c6085d3026c4e62a2ed6fd5c29eb8a1b46f38d4327d590"} Oct 02 18:43:50 crc kubenswrapper[4832]: I1002 18:43:50.831700 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerStarted","Data":"b49bc42ee89a7c21b57ad9c1e44feab12dfbd4eabe14c23229503aff70706af5"} Oct 02 18:43:56 crc kubenswrapper[4832]: I1002 18:43:56.916688 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" event={"ID":"e36d9377-cbc4-4760-ae43-3065dfe614fe","Type":"ContainerStarted","Data":"a7a0b534c3b086b1e082cb966e12050644137511336a787a16e8878826e1e870"} Oct 02 18:43:56 crc kubenswrapper[4832]: I1002 18:43:56.919950 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerStarted","Data":"a95fff730d961dbc8c71d31b990668aa43823f28e774dd3edf445dbfe5cb4008"} Oct 02 18:43:56 crc kubenswrapper[4832]: I1002 18:43:56.942131 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" podStartSLOduration=2.387924497 podStartE2EDuration="11.942109956s" podCreationTimestamp="2025-10-02 18:43:45 +0000 UTC" firstStartedPulling="2025-10-02 18:43:46.876910395 +0000 UTC m=+1383.846353267" lastFinishedPulling="2025-10-02 18:43:56.431095844 +0000 UTC m=+1393.400538726" observedRunningTime="2025-10-02 18:43:56.933931554 +0000 UTC m=+1393.903374436" watchObservedRunningTime="2025-10-02 18:43:56.942109956 +0000 UTC m=+1393.911552828" Oct 02 18:43:57 crc 
kubenswrapper[4832]: I1002 18:43:57.036855 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-q2dcq"] Oct 02 18:43:57 crc kubenswrapper[4832]: I1002 18:43:57.038840 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-q2dcq" Oct 02 18:43:57 crc kubenswrapper[4832]: I1002 18:43:57.069300 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-q2dcq"] Oct 02 18:43:57 crc kubenswrapper[4832]: I1002 18:43:57.104561 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vd7\" (UniqueName: \"kubernetes.io/projected/4776bec7-08b6-4900-8d4d-40074945b0dd-kube-api-access-q7vd7\") pod \"aodh-db-create-q2dcq\" (UID: \"4776bec7-08b6-4900-8d4d-40074945b0dd\") " pod="openstack/aodh-db-create-q2dcq" Oct 02 18:43:57 crc kubenswrapper[4832]: I1002 18:43:57.206303 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vd7\" (UniqueName: \"kubernetes.io/projected/4776bec7-08b6-4900-8d4d-40074945b0dd-kube-api-access-q7vd7\") pod \"aodh-db-create-q2dcq\" (UID: \"4776bec7-08b6-4900-8d4d-40074945b0dd\") " pod="openstack/aodh-db-create-q2dcq" Oct 02 18:43:57 crc kubenswrapper[4832]: I1002 18:43:57.226583 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vd7\" (UniqueName: \"kubernetes.io/projected/4776bec7-08b6-4900-8d4d-40074945b0dd-kube-api-access-q7vd7\") pod \"aodh-db-create-q2dcq\" (UID: \"4776bec7-08b6-4900-8d4d-40074945b0dd\") " pod="openstack/aodh-db-create-q2dcq" Oct 02 18:43:57 crc kubenswrapper[4832]: I1002 18:43:57.363793 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-q2dcq" Oct 02 18:43:57 crc kubenswrapper[4832]: I1002 18:43:57.867108 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-q2dcq"] Oct 02 18:43:57 crc kubenswrapper[4832]: W1002 18:43:57.871271 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4776bec7_08b6_4900_8d4d_40074945b0dd.slice/crio-1fd270678709ab45a76135ad2d18f0910c3160b535a57a3cf285ca16a2f7f5d3 WatchSource:0}: Error finding container 1fd270678709ab45a76135ad2d18f0910c3160b535a57a3cf285ca16a2f7f5d3: Status 404 returned error can't find the container with id 1fd270678709ab45a76135ad2d18f0910c3160b535a57a3cf285ca16a2f7f5d3 Oct 02 18:43:57 crc kubenswrapper[4832]: I1002 18:43:57.932300 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-q2dcq" event={"ID":"4776bec7-08b6-4900-8d4d-40074945b0dd","Type":"ContainerStarted","Data":"1fd270678709ab45a76135ad2d18f0910c3160b535a57a3cf285ca16a2f7f5d3"} Oct 02 18:43:59 crc kubenswrapper[4832]: I1002 18:43:59.959930 4832 generic.go:334] "Generic (PLEG): container finished" podID="4776bec7-08b6-4900-8d4d-40074945b0dd" containerID="b5a94a8f2d0ecff078823b934c29966423dad27aa002f78a91c6e12bf93bca94" exitCode=0 Oct 02 18:43:59 crc kubenswrapper[4832]: I1002 18:43:59.960484 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-q2dcq" event={"ID":"4776bec7-08b6-4900-8d4d-40074945b0dd","Type":"ContainerDied","Data":"b5a94a8f2d0ecff078823b934c29966423dad27aa002f78a91c6e12bf93bca94"} Oct 02 18:44:00 crc kubenswrapper[4832]: I1002 18:44:00.980859 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="ceilometer-central-agent" containerID="cri-o://6c5bfe5439ba5db4b7c6085d3026c4e62a2ed6fd5c29eb8a1b46f38d4327d590" gracePeriod=30 Oct 02 18:44:00 crc kubenswrapper[4832]: I1002 18:44:00.981029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerStarted","Data":"c6a3dbbaed5f039e2a1796ccd9d90e3acc941ff3041b7107b9bb253951f7e644"} Oct 02 18:44:00 crc kubenswrapper[4832]: I1002 18:44:00.981187 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="proxy-httpd" containerID="cri-o://c6a3dbbaed5f039e2a1796ccd9d90e3acc941ff3041b7107b9bb253951f7e644" gracePeriod=30 Oct 02 18:44:00 crc kubenswrapper[4832]: I1002 18:44:00.981245 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 18:44:00 crc kubenswrapper[4832]: I1002 18:44:00.981289 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="sg-core" containerID="cri-o://a95fff730d961dbc8c71d31b990668aa43823f28e774dd3edf445dbfe5cb4008" gracePeriod=30 Oct 02 18:44:00 crc kubenswrapper[4832]: I1002 18:44:00.981339 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="ceilometer-notification-agent" containerID="cri-o://b49bc42ee89a7c21b57ad9c1e44feab12dfbd4eabe14c23229503aff70706af5" gracePeriod=30 Oct 02 18:44:01 crc kubenswrapper[4832]: I1002 18:44:01.014840 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.112793169 podStartE2EDuration="14.014818872s" podCreationTimestamp="2025-10-02 18:43:47 +0000 UTC" firstStartedPulling="2025-10-02 18:43:47.919753068 +0000 UTC m=+1384.889195940" lastFinishedPulling="2025-10-02 18:43:59.821778741 +0000 UTC m=+1396.791221643" observedRunningTime="2025-10-02 18:44:01.007118504 +0000 UTC m=+1397.976561386" watchObservedRunningTime="2025-10-02 18:44:01.014818872 +0000 UTC m=+1397.984261744" Oct 02 18:44:02 crc kubenswrapper[4832]: I1002 18:44:02.003838 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerID="a95fff730d961dbc8c71d31b990668aa43823f28e774dd3edf445dbfe5cb4008" exitCode=2 Oct 02 18:44:02 crc kubenswrapper[4832]: I1002 18:44:02.005026 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerID="b49bc42ee89a7c21b57ad9c1e44feab12dfbd4eabe14c23229503aff70706af5" exitCode=0 Oct 02 18:44:02 crc kubenswrapper[4832]: I1002 18:44:02.005140 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerID="6c5bfe5439ba5db4b7c6085d3026c4e62a2ed6fd5c29eb8a1b46f38d4327d590" exitCode=0 Oct 02 18:44:02 crc kubenswrapper[4832]: I1002 18:44:02.005219 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerDied","Data":"a95fff730d961dbc8c71d31b990668aa43823f28e774dd3edf445dbfe5cb4008"} Oct 02 18:44:02 crc kubenswrapper[4832]: I1002 18:44:02.005325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerDied","Data":"b49bc42ee89a7c21b57ad9c1e44feab12dfbd4eabe14c23229503aff70706af5"} Oct 02 18:44:02 crc kubenswrapper[4832]: I1002 18:44:02.005404 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerDied","Data":"6c5bfe5439ba5db4b7c6085d3026c4e62a2ed6fd5c29eb8a1b46f38d4327d590"} Oct 02 18:44:04 crc kubenswrapper[4832]: I1002 18:44:04.752851 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-q2dcq" Oct 02 18:44:04 crc kubenswrapper[4832]: I1002 18:44:04.909915 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vd7\" (UniqueName: \"kubernetes.io/projected/4776bec7-08b6-4900-8d4d-40074945b0dd-kube-api-access-q7vd7\") pod \"4776bec7-08b6-4900-8d4d-40074945b0dd\" (UID: \"4776bec7-08b6-4900-8d4d-40074945b0dd\") " Oct 02 18:44:04 crc kubenswrapper[4832]: I1002 18:44:04.916187 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4776bec7-08b6-4900-8d4d-40074945b0dd-kube-api-access-q7vd7" (OuterVolumeSpecName: "kube-api-access-q7vd7") pod "4776bec7-08b6-4900-8d4d-40074945b0dd" (UID: "4776bec7-08b6-4900-8d4d-40074945b0dd"). InnerVolumeSpecName "kube-api-access-q7vd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:05 crc kubenswrapper[4832]: I1002 18:44:05.013450 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7vd7\" (UniqueName: \"kubernetes.io/projected/4776bec7-08b6-4900-8d4d-40074945b0dd-kube-api-access-q7vd7\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:05 crc kubenswrapper[4832]: I1002 18:44:05.044630 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-q2dcq" event={"ID":"4776bec7-08b6-4900-8d4d-40074945b0dd","Type":"ContainerDied","Data":"1fd270678709ab45a76135ad2d18f0910c3160b535a57a3cf285ca16a2f7f5d3"} Oct 02 18:44:05 crc kubenswrapper[4832]: I1002 18:44:05.044673 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd270678709ab45a76135ad2d18f0910c3160b535a57a3cf285ca16a2f7f5d3" Oct 02 18:44:05 crc kubenswrapper[4832]: I1002 18:44:05.044702 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-q2dcq" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.091316 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-eb3e-account-create-bjgln"] Oct 02 18:44:17 crc kubenswrapper[4832]: E1002 18:44:17.092561 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4776bec7-08b6-4900-8d4d-40074945b0dd" containerName="mariadb-database-create" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.092580 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4776bec7-08b6-4900-8d4d-40074945b0dd" containerName="mariadb-database-create" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.092885 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4776bec7-08b6-4900-8d4d-40074945b0dd" containerName="mariadb-database-create" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.093911 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-eb3e-account-create-bjgln" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.097372 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.108020 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-eb3e-account-create-bjgln"] Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.239010 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rkc\" (UniqueName: \"kubernetes.io/projected/afde15af-0d85-4b31-8bb1-da2cdbf8bbe9-kube-api-access-p2rkc\") pod \"aodh-eb3e-account-create-bjgln\" (UID: \"afde15af-0d85-4b31-8bb1-da2cdbf8bbe9\") " pod="openstack/aodh-eb3e-account-create-bjgln" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.341067 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rkc\" (UniqueName: \"kubernetes.io/projected/afde15af-0d85-4b31-8bb1-da2cdbf8bbe9-kube-api-access-p2rkc\") pod \"aodh-eb3e-account-create-bjgln\" (UID: \"afde15af-0d85-4b31-8bb1-da2cdbf8bbe9\") " pod="openstack/aodh-eb3e-account-create-bjgln" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.363503 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2rkc\" (UniqueName: \"kubernetes.io/projected/afde15af-0d85-4b31-8bb1-da2cdbf8bbe9-kube-api-access-p2rkc\") pod \"aodh-eb3e-account-create-bjgln\" (UID: \"afde15af-0d85-4b31-8bb1-da2cdbf8bbe9\") " pod="openstack/aodh-eb3e-account-create-bjgln" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.419452 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-eb3e-account-create-bjgln" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.459457 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Oct 02 18:44:17 crc kubenswrapper[4832]: I1002 18:44:17.903898 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-eb3e-account-create-bjgln"] Oct 02 18:44:17 crc kubenswrapper[4832]: W1002 18:44:17.912969 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafde15af_0d85_4b31_8bb1_da2cdbf8bbe9.slice/crio-5d484cfa6d503667c2bc7ce32de43c07838d032e56cb647d27af9ddf3b0b1b42 WatchSource:0}: Error finding container 5d484cfa6d503667c2bc7ce32de43c07838d032e56cb647d27af9ddf3b0b1b42: Status 404 returned error can't find the container with id 5d484cfa6d503667c2bc7ce32de43c07838d032e56cb647d27af9ddf3b0b1b42 Oct 02 18:44:18 crc kubenswrapper[4832]: I1002 18:44:18.210893 4832 generic.go:334] "Generic (PLEG): container finished" podID="afde15af-0d85-4b31-8bb1-da2cdbf8bbe9" containerID="454f486102f5d92d3d1f6ece94db22a7debb2fcb1b00e8b35d399856a093dfc8" exitCode=0 Oct 02 18:44:18 crc kubenswrapper[4832]: I1002 18:44:18.210955 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-eb3e-account-create-bjgln" event={"ID":"afde15af-0d85-4b31-8bb1-da2cdbf8bbe9","Type":"ContainerDied","Data":"454f486102f5d92d3d1f6ece94db22a7debb2fcb1b00e8b35d399856a093dfc8"} Oct 02 18:44:18 crc kubenswrapper[4832]: I1002 18:44:18.211012 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-eb3e-account-create-bjgln" 
event={"ID":"afde15af-0d85-4b31-8bb1-da2cdbf8bbe9","Type":"ContainerStarted","Data":"5d484cfa6d503667c2bc7ce32de43c07838d032e56cb647d27af9ddf3b0b1b42"} Oct 02 18:44:19 crc kubenswrapper[4832]: I1002 18:44:19.883886 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-eb3e-account-create-bjgln" Oct 02 18:44:20 crc kubenswrapper[4832]: I1002 18:44:20.011903 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2rkc\" (UniqueName: \"kubernetes.io/projected/afde15af-0d85-4b31-8bb1-da2cdbf8bbe9-kube-api-access-p2rkc\") pod \"afde15af-0d85-4b31-8bb1-da2cdbf8bbe9\" (UID: \"afde15af-0d85-4b31-8bb1-da2cdbf8bbe9\") " Oct 02 18:44:20 crc kubenswrapper[4832]: I1002 18:44:20.023550 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afde15af-0d85-4b31-8bb1-da2cdbf8bbe9-kube-api-access-p2rkc" (OuterVolumeSpecName: "kube-api-access-p2rkc") pod "afde15af-0d85-4b31-8bb1-da2cdbf8bbe9" (UID: "afde15af-0d85-4b31-8bb1-da2cdbf8bbe9"). InnerVolumeSpecName "kube-api-access-p2rkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:20 crc kubenswrapper[4832]: I1002 18:44:20.114849 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2rkc\" (UniqueName: \"kubernetes.io/projected/afde15af-0d85-4b31-8bb1-da2cdbf8bbe9-kube-api-access-p2rkc\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:20 crc kubenswrapper[4832]: I1002 18:44:20.240899 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-eb3e-account-create-bjgln" event={"ID":"afde15af-0d85-4b31-8bb1-da2cdbf8bbe9","Type":"ContainerDied","Data":"5d484cfa6d503667c2bc7ce32de43c07838d032e56cb647d27af9ddf3b0b1b42"} Oct 02 18:44:20 crc kubenswrapper[4832]: I1002 18:44:20.240962 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d484cfa6d503667c2bc7ce32de43c07838d032e56cb647d27af9ddf3b0b1b42" Oct 02 18:44:20 crc kubenswrapper[4832]: I1002 18:44:20.240985 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-eb3e-account-create-bjgln" Oct 02 18:44:20 crc kubenswrapper[4832]: I1002 18:44:20.242871 4832 generic.go:334] "Generic (PLEG): container finished" podID="e36d9377-cbc4-4760-ae43-3065dfe614fe" containerID="a7a0b534c3b086b1e082cb966e12050644137511336a787a16e8878826e1e870" exitCode=0 Oct 02 18:44:20 crc kubenswrapper[4832]: I1002 18:44:20.242918 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" event={"ID":"e36d9377-cbc4-4760-ae43-3065dfe614fe","Type":"ContainerDied","Data":"a7a0b534c3b086b1e082cb966e12050644137511336a787a16e8878826e1e870"} Oct 02 18:44:21 crc kubenswrapper[4832]: I1002 18:44:21.811690 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:44:21 crc kubenswrapper[4832]: I1002 18:44:21.963945 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-combined-ca-bundle\") pod \"e36d9377-cbc4-4760-ae43-3065dfe614fe\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " Oct 02 18:44:21 crc kubenswrapper[4832]: I1002 18:44:21.964074 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-scripts\") pod \"e36d9377-cbc4-4760-ae43-3065dfe614fe\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " Oct 02 18:44:21 crc kubenswrapper[4832]: I1002 18:44:21.964130 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-config-data\") pod \"e36d9377-cbc4-4760-ae43-3065dfe614fe\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " Oct 02 18:44:21 crc kubenswrapper[4832]: I1002 18:44:21.964231 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s2hc\" (UniqueName: \"kubernetes.io/projected/e36d9377-cbc4-4760-ae43-3065dfe614fe-kube-api-access-5s2hc\") pod \"e36d9377-cbc4-4760-ae43-3065dfe614fe\" (UID: \"e36d9377-cbc4-4760-ae43-3065dfe614fe\") " Oct 02 18:44:21 crc kubenswrapper[4832]: I1002 18:44:21.969168 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36d9377-cbc4-4760-ae43-3065dfe614fe-kube-api-access-5s2hc" (OuterVolumeSpecName: "kube-api-access-5s2hc") pod "e36d9377-cbc4-4760-ae43-3065dfe614fe" (UID: "e36d9377-cbc4-4760-ae43-3065dfe614fe"). InnerVolumeSpecName "kube-api-access-5s2hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:21 crc kubenswrapper[4832]: I1002 18:44:21.969852 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-scripts" (OuterVolumeSpecName: "scripts") pod "e36d9377-cbc4-4760-ae43-3065dfe614fe" (UID: "e36d9377-cbc4-4760-ae43-3065dfe614fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.002462 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e36d9377-cbc4-4760-ae43-3065dfe614fe" (UID: "e36d9377-cbc4-4760-ae43-3065dfe614fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.004667 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-config-data" (OuterVolumeSpecName: "config-data") pod "e36d9377-cbc4-4760-ae43-3065dfe614fe" (UID: "e36d9377-cbc4-4760-ae43-3065dfe614fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.067228 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s2hc\" (UniqueName: \"kubernetes.io/projected/e36d9377-cbc4-4760-ae43-3065dfe614fe-kube-api-access-5s2hc\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.067301 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.067317 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.067332 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e36d9377-cbc4-4760-ae43-3065dfe614fe-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.271935 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" event={"ID":"e36d9377-cbc4-4760-ae43-3065dfe614fe","Type":"ContainerDied","Data":"f3de13ca3f4f9bd4e1900055b01a33f94ec033e21a24ac037642d9badcef9a9d"} Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.271988 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3de13ca3f4f9bd4e1900055b01a33f94ec033e21a24ac037642d9badcef9a9d" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.272058 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5lmj2" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.360930 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 02 18:44:22 crc kubenswrapper[4832]: E1002 18:44:22.361475 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afde15af-0d85-4b31-8bb1-da2cdbf8bbe9" containerName="mariadb-account-create" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.361495 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="afde15af-0d85-4b31-8bb1-da2cdbf8bbe9" containerName="mariadb-account-create" Oct 02 18:44:22 crc kubenswrapper[4832]: E1002 18:44:22.361530 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36d9377-cbc4-4760-ae43-3065dfe614fe" containerName="nova-cell0-conductor-db-sync" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.361552 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36d9377-cbc4-4760-ae43-3065dfe614fe" containerName="nova-cell0-conductor-db-sync" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.361744 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="afde15af-0d85-4b31-8bb1-da2cdbf8bbe9" containerName="mariadb-account-create" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.361763 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36d9377-cbc4-4760-ae43-3065dfe614fe" containerName="nova-cell0-conductor-db-sync" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.362548 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.364339 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.364504 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-cwtfs" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.397818 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.478502 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr88q\" (UniqueName: \"kubernetes.io/projected/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-kube-api-access-zr88q\") pod \"nova-cell0-conductor-0\" (UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.478600 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.478664 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.581174 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.581311 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.581592 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr88q\" (UniqueName: \"kubernetes.io/projected/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-kube-api-access-zr88q\") pod \"nova-cell0-conductor-0\" (UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.586171 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.602087 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.603122 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr88q\" (UniqueName: \"kubernetes.io/projected/46d668ae-13cf-4e3f-a2c4-8b862cdeafcb-kube-api-access-zr88q\") pod \"nova-cell0-conductor-0\" (UID: \"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb\") " pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.633719 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-6zkpx"] Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.635285 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.639182 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.641051 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.642174 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-hgfvc" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.659162 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6zkpx"] Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.687840 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.795883 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dw2n\" (UniqueName: \"kubernetes.io/projected/3c3da35e-519a-404c-91d1-5ca7f0071d2e-kube-api-access-5dw2n\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.795987 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-config-data\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.796051 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-combined-ca-bundle\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.796083 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-scripts\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.898051 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dw2n\" (UniqueName: \"kubernetes.io/projected/3c3da35e-519a-404c-91d1-5ca7f0071d2e-kube-api-access-5dw2n\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " 
pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.898341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-config-data\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.898401 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-combined-ca-bundle\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.898434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-scripts\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.903444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-combined-ca-bundle\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.903696 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-config-data\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.905547 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-scripts\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.918960 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dw2n\" (UniqueName: \"kubernetes.io/projected/3c3da35e-519a-404c-91d1-5ca7f0071d2e-kube-api-access-5dw2n\") pod \"aodh-db-sync-6zkpx\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:22 crc kubenswrapper[4832]: I1002 18:44:22.993976 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:23 crc kubenswrapper[4832]: I1002 18:44:23.281765 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 02 18:44:23 crc kubenswrapper[4832]: I1002 18:44:23.500499 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6zkpx"] Oct 02 18:44:24 crc kubenswrapper[4832]: I1002 18:44:24.299046 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6zkpx" event={"ID":"3c3da35e-519a-404c-91d1-5ca7f0071d2e","Type":"ContainerStarted","Data":"a6d7ab264f0367d9e224e647a4a7c37e3bd15db0e70d91d98f2b889bec4dbfe3"} Oct 02 18:44:24 crc kubenswrapper[4832]: I1002 18:44:24.301767 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb","Type":"ContainerStarted","Data":"fa58ad990c017ea53c1b4022514b3726849e05e803d5e08feba63d6870ad78e1"} Oct 02 18:44:24 crc kubenswrapper[4832]: I1002 18:44:24.301864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"46d668ae-13cf-4e3f-a2c4-8b862cdeafcb","Type":"ContainerStarted","Data":"1b38e763e2603d115c62af92f262528fd9c8a07345b86094ba73ee3bb2929f14"} Oct 02 18:44:24 crc kubenswrapper[4832]: I1002 18:44:24.301903 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:24 crc kubenswrapper[4832]: I1002 18:44:24.321996 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.32196371 podStartE2EDuration="2.32196371s" podCreationTimestamp="2025-10-02 18:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:24.315132541 +0000 UTC m=+1421.284575413" watchObservedRunningTime="2025-10-02 18:44:24.32196371 +0000 UTC m=+1421.291406582" Oct 02 18:44:26 crc kubenswrapper[4832]: I1002 18:44:26.875322 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:44:26 crc kubenswrapper[4832]: I1002 18:44:26.875736 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:44:29 crc kubenswrapper[4832]: I1002 18:44:29.361769 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6zkpx" event={"ID":"3c3da35e-519a-404c-91d1-5ca7f0071d2e","Type":"ContainerStarted","Data":"098fcc088d803001483b69d8de0faf55d70fa0c89001553c45bb0d2813691889"} Oct 02 18:44:29 crc kubenswrapper[4832]: I1002 18:44:29.394828 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-6zkpx" podStartSLOduration=2.153263974 podStartE2EDuration="7.394800405s" podCreationTimestamp="2025-10-02 18:44:22 +0000 UTC" firstStartedPulling="2025-10-02 18:44:23.49965163 +0000 UTC m=+1420.469094502" lastFinishedPulling="2025-10-02 18:44:28.741188051 +0000 UTC m=+1425.710630933" 
observedRunningTime="2025-10-02 18:44:29.374375636 +0000 UTC m=+1426.343818528" watchObservedRunningTime="2025-10-02 18:44:29.394800405 +0000 UTC m=+1426.364243277" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.391447 4832 generic.go:334] "Generic (PLEG): container finished" podID="3c3da35e-519a-404c-91d1-5ca7f0071d2e" containerID="098fcc088d803001483b69d8de0faf55d70fa0c89001553c45bb0d2813691889" exitCode=0 Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.391822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6zkpx" event={"ID":"3c3da35e-519a-404c-91d1-5ca7f0071d2e","Type":"ContainerDied","Data":"098fcc088d803001483b69d8de0faf55d70fa0c89001553c45bb0d2813691889"} Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.399859 4832 generic.go:334] "Generic (PLEG): container finished" podID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerID="c6a3dbbaed5f039e2a1796ccd9d90e3acc941ff3041b7107b9bb253951f7e644" exitCode=137 Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.399919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerDied","Data":"c6a3dbbaed5f039e2a1796ccd9d90e3acc941ff3041b7107b9bb253951f7e644"} Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.566302 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.620596 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-config-data\") pod \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.620651 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-run-httpd\") pod \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.620731 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-sg-core-conf-yaml\") pod \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.620756 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-combined-ca-bundle\") pod \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.620833 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-log-httpd\") pod \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.620919 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-scripts\") pod \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " Oct 02 18:44:31 
crc kubenswrapper[4832]: I1002 18:44:31.620953 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72hdm\" (UniqueName: \"kubernetes.io/projected/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-kube-api-access-72hdm\") pod \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\" (UID: \"bc497214-f31f-4ca9-9076-8ae4a83bb6f5\") " Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.622073 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc497214-f31f-4ca9-9076-8ae4a83bb6f5" (UID: "bc497214-f31f-4ca9-9076-8ae4a83bb6f5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.622340 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc497214-f31f-4ca9-9076-8ae4a83bb6f5" (UID: "bc497214-f31f-4ca9-9076-8ae4a83bb6f5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.627757 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-scripts" (OuterVolumeSpecName: "scripts") pod "bc497214-f31f-4ca9-9076-8ae4a83bb6f5" (UID: "bc497214-f31f-4ca9-9076-8ae4a83bb6f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.635391 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-kube-api-access-72hdm" (OuterVolumeSpecName: "kube-api-access-72hdm") pod "bc497214-f31f-4ca9-9076-8ae4a83bb6f5" (UID: "bc497214-f31f-4ca9-9076-8ae4a83bb6f5"). InnerVolumeSpecName "kube-api-access-72hdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.655087 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc497214-f31f-4ca9-9076-8ae4a83bb6f5" (UID: "bc497214-f31f-4ca9-9076-8ae4a83bb6f5"). InnerVolumeSpecName "sg-core-conf-yaml". 
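
The three ceilometer-0 exit codes recorded above tell the shutdown story: the central and notification agents exited 0 and sg-core exited 2 within about a second of the kill, while proxy-httpd reported exitCode=137 at 18:44:31, roughly 30 s after the kill was issued with gracePeriod=30. 137 is 128 + 9: the container did not stop on SIGTERM and was SIGKILLed when the grace period expired. Runtimes encode death-by-signal as 128 plus the signal number; a short decoder for the convention:

    import signal

    def describe_exit(code: int) -> str:
        # Runtimes report 128 + signal number for signal deaths, so
        # 137 = 128 + 9 (SIGKILL) and 143 = 128 + 15 (SIGTERM).
        if code > 128:
            return f"killed by {signal.Signals(code - 128).name}"
        return f"exited with status {code}"

    for code in (0, 2, 137):   # the ceilometer-0 exit codes above
        print(code, "->", describe_exit(code))
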
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.724479 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.724529 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.724546 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72hdm\" (UniqueName: \"kubernetes.io/projected/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-kube-api-access-72hdm\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.724564 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.724580 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.737444 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc497214-f31f-4ca9-9076-8ae4a83bb6f5" (UID: "bc497214-f31f-4ca9-9076-8ae4a83bb6f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.756354 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-config-data" (OuterVolumeSpecName: "config-data") pod "bc497214-f31f-4ca9-9076-8ae4a83bb6f5" (UID: "bc497214-f31f-4ca9-9076-8ae4a83bb6f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.827110 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:31 crc kubenswrapper[4832]: I1002 18:44:31.827409 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc497214-f31f-4ca9-9076-8ae4a83bb6f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.414380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc497214-f31f-4ca9-9076-8ae4a83bb6f5","Type":"ContainerDied","Data":"aa14e714b37e08b254f68c8c719336ee7e11d7c5ffb03aed9a84bf2d20db5d17"} Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.414682 4832 scope.go:117] "RemoveContainer" containerID="c6a3dbbaed5f039e2a1796ccd9d90e3acc941ff3041b7107b9bb253951f7e644" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.414577 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.450662 4832 scope.go:117] "RemoveContainer" containerID="a95fff730d961dbc8c71d31b990668aa43823f28e774dd3edf445dbfe5cb4008" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.474212 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.482336 4832 scope.go:117] "RemoveContainer" containerID="b49bc42ee89a7c21b57ad9c1e44feab12dfbd4eabe14c23229503aff70706af5" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.491483 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.513450 4832 scope.go:117] "RemoveContainer" containerID="6c5bfe5439ba5db4b7c6085d3026c4e62a2ed6fd5c29eb8a1b46f38d4327d590" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.523961 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:44:32 crc kubenswrapper[4832]: E1002 18:44:32.524480 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="ceilometer-central-agent" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.524499 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="ceilometer-central-agent" Oct 02 18:44:32 crc kubenswrapper[4832]: E1002 18:44:32.524527 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="ceilometer-notification-agent" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.524534 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="ceilometer-notification-agent" Oct 02 18:44:32 crc kubenswrapper[4832]: E1002 18:44:32.524567 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="proxy-httpd" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.524573 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="proxy-httpd" Oct 02 18:44:32 crc kubenswrapper[4832]: E1002 18:44:32.524594 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="sg-core" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.524599 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="sg-core" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.524794 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="ceilometer-central-agent" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.524810 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="sg-core" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.524820 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="ceilometer-notification-agent" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.524830 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" containerName="proxy-httpd" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.527092 4832 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.532169 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.532418 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.533408 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.647151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-log-httpd\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.647281 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-scripts\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.647315 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-config-data\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.647348 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.647432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7t8\" (UniqueName: \"kubernetes.io/projected/1e796fa8-fecd-474b-981e-61af334beee4-kube-api-access-tp7t8\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.647551 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.647623 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-run-httpd\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.738588 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.748950 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp7t8\" (UniqueName: 
\"kubernetes.io/projected/1e796fa8-fecd-474b-981e-61af334beee4-kube-api-access-tp7t8\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.749043 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.749119 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-run-httpd\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.749189 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-log-httpd\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.749248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-scripts\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.749292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-config-data\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.749326 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.750320 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-run-httpd\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.750586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-log-httpd\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.772623 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-scripts\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.772875 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-config-data\") pod 
\"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.776721 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7t8\" (UniqueName: \"kubernetes.io/projected/1e796fa8-fecd-474b-981e-61af334beee4-kube-api-access-tp7t8\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.780189 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.780565 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.856783 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:44:32 crc kubenswrapper[4832]: I1002 18:44:32.993865 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.054528 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dw2n\" (UniqueName: \"kubernetes.io/projected/3c3da35e-519a-404c-91d1-5ca7f0071d2e-kube-api-access-5dw2n\") pod \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.054634 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-combined-ca-bundle\") pod \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.054814 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-scripts\") pod \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.054897 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-config-data\") pod \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\" (UID: \"3c3da35e-519a-404c-91d1-5ca7f0071d2e\") " Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.061360 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3da35e-519a-404c-91d1-5ca7f0071d2e-kube-api-access-5dw2n" (OuterVolumeSpecName: "kube-api-access-5dw2n") pod "3c3da35e-519a-404c-91d1-5ca7f0071d2e" (UID: "3c3da35e-519a-404c-91d1-5ca7f0071d2e"). InnerVolumeSpecName "kube-api-access-5dw2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.077876 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-scripts" (OuterVolumeSpecName: "scripts") pod "3c3da35e-519a-404c-91d1-5ca7f0071d2e" (UID: "3c3da35e-519a-404c-91d1-5ca7f0071d2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.095496 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c3da35e-519a-404c-91d1-5ca7f0071d2e" (UID: "3c3da35e-519a-404c-91d1-5ca7f0071d2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.098391 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-config-data" (OuterVolumeSpecName: "config-data") pod "3c3da35e-519a-404c-91d1-5ca7f0071d2e" (UID: "3c3da35e-519a-404c-91d1-5ca7f0071d2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.158184 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.158216 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.158227 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dw2n\" (UniqueName: \"kubernetes.io/projected/3c3da35e-519a-404c-91d1-5ca7f0071d2e-kube-api-access-5dw2n\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.158256 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3da35e-519a-404c-91d1-5ca7f0071d2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.263552 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc497214-f31f-4ca9-9076-8ae4a83bb6f5" path="/var/lib/kubelet/pods/bc497214-f31f-4ca9-9076-8ae4a83bb6f5/volumes" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.353185 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-kr474"] Oct 02 18:44:33 crc kubenswrapper[4832]: E1002 18:44:33.353800 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3da35e-519a-404c-91d1-5ca7f0071d2e" containerName="aodh-db-sync" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.353825 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3da35e-519a-404c-91d1-5ca7f0071d2e" containerName="aodh-db-sync" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.354113 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3da35e-519a-404c-91d1-5ca7f0071d2e" containerName="aodh-db-sync" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.355150 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.357946 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.359758 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 02 18:44:33 crc kubenswrapper[4832]: W1002 18:44:33.400893 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e796fa8_fecd_474b_981e_61af334beee4.slice/crio-6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8 WatchSource:0}: Error finding container 6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8: Status 404 returned error can't find the container with id 6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8 Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.442522 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kr474"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.471553 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerStarted","Data":"6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8"} Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.471765 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49z8x\" (UniqueName: \"kubernetes.io/projected/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-kube-api-access-49z8x\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.471823 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.471851 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-scripts\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.471945 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-config-data\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.489042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6zkpx" event={"ID":"3c3da35e-519a-404c-91d1-5ca7f0071d2e","Type":"ContainerDied","Data":"a6d7ab264f0367d9e224e647a4a7c37e3bd15db0e70d91d98f2b889bec4dbfe3"} Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.489081 4832 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a6d7ab264f0367d9e224e647a4a7c37e3bd15db0e70d91d98f2b889bec4dbfe3" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.489145 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6zkpx" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.516752 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.575558 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.576865 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.576912 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49z8x\" (UniqueName: \"kubernetes.io/projected/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-kube-api-access-49z8x\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.576941 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-scripts\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.577046 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-config-data\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.577107 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.580958 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.582076 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-scripts\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.590284 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.596090 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-config-data\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.603048 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49z8x\" (UniqueName: \"kubernetes.io/projected/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-kube-api-access-49z8x\") pod \"nova-cell0-cell-mapping-kr474\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.611893 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.672483 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.680117 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.680450 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhd6h\" (UniqueName: \"kubernetes.io/projected/0847b290-25a2-406a-8e3c-31952edbd846-kube-api-access-bhd6h\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.680611 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-config-data\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.680659 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.684545 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.711573 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.747837 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.750513 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.753455 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-hgfvc" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.753759 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.755006 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.760653 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.784719 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-scripts\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.784978 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-config-data\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785074 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785106 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ht72\" (UniqueName: \"kubernetes.io/projected/9d861870-773c-4994-b599-62adec02a99a-kube-api-access-6ht72\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785131 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785155 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-config-data\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785185 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhd6h\" (UniqueName: \"kubernetes.io/projected/0847b290-25a2-406a-8e3c-31952edbd846-kube-api-access-bhd6h\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785212 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qcd8\" (UniqueName: \"kubernetes.io/projected/fe1d8845-2d10-4a17-b10a-44ea05f9671d-kube-api-access-7qcd8\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-config-data\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.785363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d8845-2d10-4a17-b10a-44ea05f9671d-logs\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.799233 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-config-data\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.800493 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.802290 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.802419 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.810824 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.842871 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.859591 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhd6h\" (UniqueName: \"kubernetes.io/projected/0847b290-25a2-406a-8e3c-31952edbd846-kube-api-access-bhd6h\") pod \"nova-scheduler-0\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.865402 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.883393 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.885305 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.888246 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ht72\" (UniqueName: \"kubernetes.io/projected/9d861870-773c-4994-b599-62adec02a99a-kube-api-access-6ht72\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894401 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894437 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-config-data\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qcd8\" (UniqueName: \"kubernetes.io/projected/fe1d8845-2d10-4a17-b10a-44ea05f9671d-kube-api-access-7qcd8\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894645 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-config-data\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtz96\" (UniqueName: \"kubernetes.io/projected/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-kube-api-access-vtz96\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894720 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d8845-2d10-4a17-b10a-44ea05f9671d-logs\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894767 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-scripts\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894804 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894826 4832 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.894915 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.897029 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d8845-2d10-4a17-b10a-44ea05f9671d-logs\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.907094 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.910129 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.913962 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-config-data\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.917685 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-scripts\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.918489 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-config-data\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.922150 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ht72\" (UniqueName: \"kubernetes.io/projected/9d861870-773c-4994-b599-62adec02a99a-kube-api-access-6ht72\") pod \"aodh-0\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " pod="openstack/aodh-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.927862 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qcd8\" (UniqueName: \"kubernetes.io/projected/fe1d8845-2d10-4a17-b10a-44ea05f9671d-kube-api-access-7qcd8\") pod \"nova-api-0\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " pod="openstack/nova-api-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.938034 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-metadata-0"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.967128 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.968940 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-knxks"] Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.970970 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.997168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.997253 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.997289 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.997341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4jj\" (UniqueName: \"kubernetes.io/projected/b55c9826-fe7e-4a17-800c-6e45446af3a2-kube-api-access-jr4jj\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.997425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-config-data\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.998241 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b55c9826-fe7e-4a17-800c-6e45446af3a2-logs\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:33 crc kubenswrapper[4832]: I1002 18:44:33.998297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtz96\" (UniqueName: \"kubernetes.io/projected/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-kube-api-access-vtz96\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.016715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.021185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.023322 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-knxks"] Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.031194 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtz96\" (UniqueName: \"kubernetes.io/projected/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-kube-api-access-vtz96\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.034554 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.092390 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.111093 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b55c9826-fe7e-4a17-800c-6e45446af3a2-logs\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.111151 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2nz5\" (UniqueName: \"kubernetes.io/projected/09048752-dbc7-4dd6-98c9-c74b48acf66d-kube-api-access-f2nz5\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.111210 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.111271 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.111556 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b55c9826-fe7e-4a17-800c-6e45446af3a2-logs\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.113012 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " 
pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.113045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-config\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.113675 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-svc\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.113721 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr4jj\" (UniqueName: \"kubernetes.io/projected/b55c9826-fe7e-4a17-800c-6e45446af3a2-kube-api-access-jr4jj\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.113839 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.113923 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-config-data\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.119503 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-config-data\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.126844 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.145292 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr4jj\" (UniqueName: \"kubernetes.io/projected/b55c9826-fe7e-4a17-800c-6e45446af3a2-kube-api-access-jr4jj\") pod \"nova-metadata-0\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.200483 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.216833 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2nz5\" (UniqueName: \"kubernetes.io/projected/09048752-dbc7-4dd6-98c9-c74b48acf66d-kube-api-access-f2nz5\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.216904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.216950 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.216985 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-config\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.217026 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-svc\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.217079 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.218852 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.219724 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.220509 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-config\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 
18:44:34.221082 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.221874 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-svc\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.235058 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.273540 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2nz5\" (UniqueName: \"kubernetes.io/projected/09048752-dbc7-4dd6-98c9-c74b48acf66d-kube-api-access-f2nz5\") pod \"dnsmasq-dns-9b86998b5-knxks\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.299125 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.463303 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kr474"] Oct 02 18:44:34 crc kubenswrapper[4832]: W1002 18:44:34.483371 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3933a2_ea03_4354_bfa4_1ec240e12c9d.slice/crio-c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7 WatchSource:0}: Error finding container c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7: Status 404 returned error can't find the container with id c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7 Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.528694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kr474" event={"ID":"ed3933a2-ea03-4354-bfa4-1ec240e12c9d","Type":"ContainerStarted","Data":"c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7"} Oct 02 18:44:34 crc kubenswrapper[4832]: I1002 18:44:34.717519 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.090713 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zznjp"] Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.092688 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.100098 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.100100 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.103912 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zznjp"] Oct 02 18:44:35 crc kubenswrapper[4832]: W1002 18:44:35.144429 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7aa817_b0e0_44b8_afb5_bf3a3e6e362c.slice/crio-c2324dad69118b50d160ca2f6f824c545223b8aa42495f2f489f9a03f710c42d WatchSource:0}: Error finding container c2324dad69118b50d160ca2f6f824c545223b8aa42495f2f489f9a03f710c42d: Status 404 returned error can't find the container with id c2324dad69118b50d160ca2f6f824c545223b8aa42495f2f489f9a03f710c42d Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.165443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.165550 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwxn7\" (UniqueName: \"kubernetes.io/projected/88751a34-122e-469a-955d-d91072955b66-kube-api-access-xwxn7\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.165612 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-config-data\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.165710 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-scripts\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: W1002 18:44:35.168223 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-bb47efe3239d62553afdab9e6369edcf08953eded46b79974a0902b50aea755c WatchSource:0}: Error finding container bb47efe3239d62553afdab9e6369edcf08953eded46b79974a0902b50aea755c: Status 404 returned error can't find the container with id bb47efe3239d62553afdab9e6369edcf08953eded46b79974a0902b50aea755c Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.175078 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:44:35 crc kubenswrapper[4832]: 
I1002 18:44:35.198872 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.217152 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.269445 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwxn7\" (UniqueName: \"kubernetes.io/projected/88751a34-122e-469a-955d-d91072955b66-kube-api-access-xwxn7\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.269761 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-config-data\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.269844 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-scripts\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.269903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.279415 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.291113 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-config-data\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.302443 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-scripts\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.306399 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwxn7\" (UniqueName: \"kubernetes.io/projected/88751a34-122e-469a-955d-d91072955b66-kube-api-access-xwxn7\") pod \"nova-cell1-conductor-db-sync-zznjp\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.433351 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.465099 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-knxks"] Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.493487 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.564873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kr474" event={"ID":"ed3933a2-ea03-4354-bfa4-1ec240e12c9d","Type":"ContainerStarted","Data":"246891ccf1730a456c880550fbb7c7c2019276914070764ba9f3d90faadbcca5"} Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.594043 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-kr474" podStartSLOduration=2.591763785 podStartE2EDuration="2.591763785s" podCreationTimestamp="2025-10-02 18:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:35.585929366 +0000 UTC m=+1432.555372238" watchObservedRunningTime="2025-10-02 18:44:35.591763785 +0000 UTC m=+1432.561206657" Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.603542 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-knxks" event={"ID":"09048752-dbc7-4dd6-98c9-c74b48acf66d","Type":"ContainerStarted","Data":"5d84d8ed60e6f5631f9f073fb80e519d35ede3f150342746106ec8896b93659b"} Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.619939 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c","Type":"ContainerStarted","Data":"c2324dad69118b50d160ca2f6f824c545223b8aa42495f2f489f9a03f710c42d"} Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.643639 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe1d8845-2d10-4a17-b10a-44ea05f9671d","Type":"ContainerStarted","Data":"1250d72a3a090ea49871efb5919c7ab31781fe1a5d522735d6c1f0ccc2c48007"} Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.649038 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0847b290-25a2-406a-8e3c-31952edbd846","Type":"ContainerStarted","Data":"bb47efe3239d62553afdab9e6369edcf08953eded46b79974a0902b50aea755c"} Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.662046 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerStarted","Data":"da09935512be5e40b8efd16470d6d0ac96ae59fe10ea93118bec628b66838315"} Oct 02 18:44:35 crc kubenswrapper[4832]: I1002 18:44:35.667320 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerStarted","Data":"d8974936518950d561faf59616c897cb0104fe3a253348dd0dc4c85439254cba"} Oct 02 18:44:36 crc kubenswrapper[4832]: I1002 18:44:36.168595 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zznjp"] Oct 02 18:44:36 crc kubenswrapper[4832]: I1002 18:44:36.703428 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerStarted","Data":"5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26"} Oct 02 18:44:36 crc kubenswrapper[4832]: I1002 18:44:36.707562 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b55c9826-fe7e-4a17-800c-6e45446af3a2","Type":"ContainerStarted","Data":"ac6ba81e475114cd506efbd68f33ea66521bd88b0103e37d34f39d7cf6614272"} Oct 02 18:44:36 crc kubenswrapper[4832]: I1002 18:44:36.719842 4832 generic.go:334] "Generic (PLEG): container finished" podID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerID="3553efff0e8a732fa0f4252dd5919a2ab7477ff2f33f3a737307b0359397c720" exitCode=0 Oct 02 18:44:36 crc kubenswrapper[4832]: I1002 18:44:36.719996 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-knxks" event={"ID":"09048752-dbc7-4dd6-98c9-c74b48acf66d","Type":"ContainerDied","Data":"3553efff0e8a732fa0f4252dd5919a2ab7477ff2f33f3a737307b0359397c720"} Oct 02 18:44:36 crc kubenswrapper[4832]: I1002 18:44:36.740682 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zznjp" event={"ID":"88751a34-122e-469a-955d-d91072955b66","Type":"ContainerStarted","Data":"5e38f70d423123e3f08667789ee3ffde916ddf05db59ebb4371ef66e751ff326"} Oct 02 18:44:36 crc kubenswrapper[4832]: I1002 18:44:36.759847 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerStarted","Data":"de1b04afa2931ad448f7bf9605599f109acc91870715d38f68605e0c677f884d"} Oct 02 18:44:37 crc kubenswrapper[4832]: I1002 18:44:37.204403 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:37 crc kubenswrapper[4832]: I1002 18:44:37.214937 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:44:37 crc kubenswrapper[4832]: I1002 18:44:37.773169 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerStarted","Data":"9cea6ba09ce766a77e14f9ae90fd2d71d92226a7d026140efa4d234b484c6570"} Oct 02 18:44:37 crc kubenswrapper[4832]: I1002 18:44:37.779611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-knxks" event={"ID":"09048752-dbc7-4dd6-98c9-c74b48acf66d","Type":"ContainerStarted","Data":"bec0a71cb6e0992991d837b29640a3f4051f0be9bdd55e136f188d6d9f997570"} Oct 02 18:44:37 crc kubenswrapper[4832]: I1002 18:44:37.780689 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:37 crc kubenswrapper[4832]: I1002 18:44:37.785526 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zznjp" event={"ID":"88751a34-122e-469a-955d-d91072955b66","Type":"ContainerStarted","Data":"e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8"} Oct 02 18:44:37 crc kubenswrapper[4832]: I1002 18:44:37.802476 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-knxks" podStartSLOduration=4.802459514 podStartE2EDuration="4.802459514s" podCreationTimestamp="2025-10-02 18:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:37.796338076 +0000 UTC m=+1434.765780948" 
watchObservedRunningTime="2025-10-02 18:44:37.802459514 +0000 UTC m=+1434.771902386" Oct 02 18:44:37 crc kubenswrapper[4832]: I1002 18:44:37.825878 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zznjp" podStartSLOduration=2.825858705 podStartE2EDuration="2.825858705s" podCreationTimestamp="2025-10-02 18:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:37.811812712 +0000 UTC m=+1434.781255584" watchObservedRunningTime="2025-10-02 18:44:37.825858705 +0000 UTC m=+1434.795301577" Oct 02 18:44:40 crc kubenswrapper[4832]: I1002 18:44:40.719759 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:44:41 crc kubenswrapper[4832]: I1002 18:44:41.818411 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.850583 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerStarted","Data":"58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.855572 4832 generic.go:334] "Generic (PLEG): container finished" podID="ed3933a2-ea03-4354-bfa4-1ec240e12c9d" containerID="246891ccf1730a456c880550fbb7c7c2019276914070764ba9f3d90faadbcca5" exitCode=0 Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.855632 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kr474" event={"ID":"ed3933a2-ea03-4354-bfa4-1ec240e12c9d","Type":"ContainerDied","Data":"246891ccf1730a456c880550fbb7c7c2019276914070764ba9f3d90faadbcca5"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.860244 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b55c9826-fe7e-4a17-800c-6e45446af3a2","Type":"ContainerStarted","Data":"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.860300 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b55c9826-fe7e-4a17-800c-6e45446af3a2","Type":"ContainerStarted","Data":"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.860423 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerName="nova-metadata-log" containerID="cri-o://f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8" gracePeriod=30 Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.860659 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerName="nova-metadata-metadata" containerID="cri-o://ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0" gracePeriod=30 Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.867251 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c","Type":"ContainerStarted","Data":"9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.867391 4832 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5" gracePeriod=30 Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.880732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe1d8845-2d10-4a17-b10a-44ea05f9671d","Type":"ContainerStarted","Data":"386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.880801 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe1d8845-2d10-4a17-b10a-44ea05f9671d","Type":"ContainerStarted","Data":"16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.889638 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0847b290-25a2-406a-8e3c-31952edbd846","Type":"ContainerStarted","Data":"d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.892558 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.017895946 podStartE2EDuration="10.892538254s" podCreationTimestamp="2025-10-02 18:44:33 +0000 UTC" firstStartedPulling="2025-10-02 18:44:35.573797352 +0000 UTC m=+1432.543240224" lastFinishedPulling="2025-10-02 18:44:42.44843966 +0000 UTC m=+1439.417882532" observedRunningTime="2025-10-02 18:44:43.884703543 +0000 UTC m=+1440.854146415" watchObservedRunningTime="2025-10-02 18:44:43.892538254 +0000 UTC m=+1440.861981126" Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.901488 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerStarted","Data":"031ee0e51f42f30312df24f221d8d9c06a69fe226047e27108c774e513c9107c"} Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.901567 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="sg-core" containerID="cri-o://9cea6ba09ce766a77e14f9ae90fd2d71d92226a7d026140efa4d234b484c6570" gracePeriod=30 Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.901835 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.901848 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="proxy-httpd" containerID="cri-o://031ee0e51f42f30312df24f221d8d9c06a69fe226047e27108c774e513c9107c" gracePeriod=30 Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.901874 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="ceilometer-notification-agent" containerID="cri-o://de1b04afa2931ad448f7bf9605599f109acc91870715d38f68605e0c677f884d" gracePeriod=30 Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.901500 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="ceilometer-central-agent" 
containerID="cri-o://da09935512be5e40b8efd16470d6d0ac96ae59fe10ea93118bec628b66838315" gracePeriod=30 Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.919173 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.64308011 podStartE2EDuration="10.918836454s" podCreationTimestamp="2025-10-02 18:44:33 +0000 UTC" firstStartedPulling="2025-10-02 18:44:35.146803109 +0000 UTC m=+1432.116245981" lastFinishedPulling="2025-10-02 18:44:42.422559453 +0000 UTC m=+1439.392002325" observedRunningTime="2025-10-02 18:44:43.904075909 +0000 UTC m=+1440.873518781" watchObservedRunningTime="2025-10-02 18:44:43.918836454 +0000 UTC m=+1440.888279336" Oct 02 18:44:43 crc kubenswrapper[4832]: I1002 18:44:43.929883 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.676893792 podStartE2EDuration="10.929866424s" podCreationTimestamp="2025-10-02 18:44:33 +0000 UTC" firstStartedPulling="2025-10-02 18:44:35.169630192 +0000 UTC m=+1432.139073064" lastFinishedPulling="2025-10-02 18:44:42.422602824 +0000 UTC m=+1439.392045696" observedRunningTime="2025-10-02 18:44:43.925017995 +0000 UTC m=+1440.894460867" watchObservedRunningTime="2025-10-02 18:44:43.929866424 +0000 UTC m=+1440.899309296" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:43.999798 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.000038 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.018248 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.979174164 podStartE2EDuration="12.018229416s" podCreationTimestamp="2025-10-02 18:44:32 +0000 UTC" firstStartedPulling="2025-10-02 18:44:33.40879277 +0000 UTC m=+1430.378235642" lastFinishedPulling="2025-10-02 18:44:42.447848022 +0000 UTC m=+1439.417290894" observedRunningTime="2025-10-02 18:44:43.99919385 +0000 UTC m=+1440.968636722" watchObservedRunningTime="2025-10-02 18:44:44.018229416 +0000 UTC m=+1440.987672278" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.035625 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.035683 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.045381 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.353836811 podStartE2EDuration="11.045357422s" podCreationTimestamp="2025-10-02 18:44:33 +0000 UTC" firstStartedPulling="2025-10-02 18:44:34.731475235 +0000 UTC m=+1431.700918107" lastFinishedPulling="2025-10-02 18:44:42.422995846 +0000 UTC m=+1439.392438718" observedRunningTime="2025-10-02 18:44:44.035523319 +0000 UTC m=+1441.004966191" watchObservedRunningTime="2025-10-02 18:44:44.045357422 +0000 UTC m=+1441.014800294" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.071962 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.202508 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.236314 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.236373 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.302480 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.418915 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-q287c"] Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.419138 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" podUID="ecd658b9-1c22-4778-afde-b392155b499a" containerName="dnsmasq-dns" containerID="cri-o://7e538ab23eed35a72ba35c848b06cc5c8bc855896563d008d0a6e697c2c7a86f" gracePeriod=10 Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.897471 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.929260 4832 generic.go:334] "Generic (PLEG): container finished" podID="1e796fa8-fecd-474b-981e-61af334beee4" containerID="031ee0e51f42f30312df24f221d8d9c06a69fe226047e27108c774e513c9107c" exitCode=0 Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.929301 4832 generic.go:334] "Generic (PLEG): container finished" podID="1e796fa8-fecd-474b-981e-61af334beee4" containerID="9cea6ba09ce766a77e14f9ae90fd2d71d92226a7d026140efa4d234b484c6570" exitCode=2 Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.929308 4832 generic.go:334] "Generic (PLEG): container finished" podID="1e796fa8-fecd-474b-981e-61af334beee4" containerID="de1b04afa2931ad448f7bf9605599f109acc91870715d38f68605e0c677f884d" exitCode=0 Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.929314 4832 generic.go:334] "Generic (PLEG): container finished" podID="1e796fa8-fecd-474b-981e-61af334beee4" containerID="da09935512be5e40b8efd16470d6d0ac96ae59fe10ea93118bec628b66838315" exitCode=0 Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.929354 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerDied","Data":"031ee0e51f42f30312df24f221d8d9c06a69fe226047e27108c774e513c9107c"} Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.929380 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerDied","Data":"9cea6ba09ce766a77e14f9ae90fd2d71d92226a7d026140efa4d234b484c6570"} Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.929392 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerDied","Data":"de1b04afa2931ad448f7bf9605599f109acc91870715d38f68605e0c677f884d"} Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.929401 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerDied","Data":"da09935512be5e40b8efd16470d6d0ac96ae59fe10ea93118bec628b66838315"} Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.938466 4832 generic.go:334] "Generic 
(PLEG): container finished" podID="ecd658b9-1c22-4778-afde-b392155b499a" containerID="7e538ab23eed35a72ba35c848b06cc5c8bc855896563d008d0a6e697c2c7a86f" exitCode=0 Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.938538 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" event={"ID":"ecd658b9-1c22-4778-afde-b392155b499a","Type":"ContainerDied","Data":"7e538ab23eed35a72ba35c848b06cc5c8bc855896563d008d0a6e697c2c7a86f"} Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.946657 4832 generic.go:334] "Generic (PLEG): container finished" podID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerID="ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0" exitCode=0 Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.946699 4832 generic.go:334] "Generic (PLEG): container finished" podID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerID="f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8" exitCode=143 Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.948045 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.948646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b55c9826-fe7e-4a17-800c-6e45446af3a2","Type":"ContainerDied","Data":"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0"} Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.948681 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b55c9826-fe7e-4a17-800c-6e45446af3a2","Type":"ContainerDied","Data":"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8"} Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.948698 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b55c9826-fe7e-4a17-800c-6e45446af3a2","Type":"ContainerDied","Data":"ac6ba81e475114cd506efbd68f33ea66521bd88b0103e37d34f39d7cf6614272"} Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.948864 4832 scope.go:117] "RemoveContainer" containerID="ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.979207 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-combined-ca-bundle\") pod \"b55c9826-fe7e-4a17-800c-6e45446af3a2\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.979328 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr4jj\" (UniqueName: \"kubernetes.io/projected/b55c9826-fe7e-4a17-800c-6e45446af3a2-kube-api-access-jr4jj\") pod \"b55c9826-fe7e-4a17-800c-6e45446af3a2\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.979399 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-config-data\") pod \"b55c9826-fe7e-4a17-800c-6e45446af3a2\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.979451 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b55c9826-fe7e-4a17-800c-6e45446af3a2-logs\") 
pod \"b55c9826-fe7e-4a17-800c-6e45446af3a2\" (UID: \"b55c9826-fe7e-4a17-800c-6e45446af3a2\") " Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.980605 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b55c9826-fe7e-4a17-800c-6e45446af3a2-logs" (OuterVolumeSpecName: "logs") pod "b55c9826-fe7e-4a17-800c-6e45446af3a2" (UID: "b55c9826-fe7e-4a17-800c-6e45446af3a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:44:44 crc kubenswrapper[4832]: I1002 18:44:44.992481 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b55c9826-fe7e-4a17-800c-6e45446af3a2-kube-api-access-jr4jj" (OuterVolumeSpecName: "kube-api-access-jr4jj") pod "b55c9826-fe7e-4a17-800c-6e45446af3a2" (UID: "b55c9826-fe7e-4a17-800c-6e45446af3a2"). InnerVolumeSpecName "kube-api-access-jr4jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.019658 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b55c9826-fe7e-4a17-800c-6e45446af3a2" (UID: "b55c9826-fe7e-4a17-800c-6e45446af3a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.020433 4832 scope.go:117] "RemoveContainer" containerID="f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.020613 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.025458 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-config-data" (OuterVolumeSpecName: "config-data") pod "b55c9826-fe7e-4a17-800c-6e45446af3a2" (UID: "b55c9826-fe7e-4a17-800c-6e45446af3a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.053254 4832 scope.go:117] "RemoveContainer" containerID="ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0" Oct 02 18:44:45 crc kubenswrapper[4832]: E1002 18:44:45.053674 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0\": container with ID starting with ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0 not found: ID does not exist" containerID="ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.053697 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0"} err="failed to get container status \"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0\": rpc error: code = NotFound desc = could not find container \"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0\": container with ID starting with ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0 not found: ID does not exist" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.053716 4832 scope.go:117] "RemoveContainer" containerID="f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8" Oct 02 18:44:45 crc kubenswrapper[4832]: E1002 18:44:45.053881 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8\": container with ID starting with f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8 not found: ID does not exist" containerID="f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.053896 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8"} err="failed to get container status \"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8\": rpc error: code = NotFound desc = could not find container \"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8\": container with ID starting with f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8 not found: ID does not exist" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.053910 4832 scope.go:117] "RemoveContainer" containerID="ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.054057 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0"} err="failed to get container status \"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0\": rpc error: code = NotFound desc = could not find container \"ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0\": container with ID starting with ad22ad09d9977a6d06f6afeb0926bac84376007e01518fc35807d1c272e464c0 not found: ID does not exist" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.054070 4832 scope.go:117] "RemoveContainer" containerID="f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.054223 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8"} err="failed to get container status \"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8\": rpc error: code = NotFound desc = could not find container \"f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8\": container with ID starting with f390f0d72e399f1a6804e40328e1f441ac86afa531f20aa8159550f072c7b5d8 not found: ID does not exist" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.082247 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.082309 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr4jj\" (UniqueName: \"kubernetes.io/projected/b55c9826-fe7e-4a17-800c-6e45446af3a2-kube-api-access-jr4jj\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.082325 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b55c9826-fe7e-4a17-800c-6e45446af3a2-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.082339 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b55c9826-fe7e-4a17-800c-6e45446af3a2-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.119404 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.235:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.119898 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.235:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.342455 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.358667 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.377747 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:45 crc kubenswrapper[4832]: E1002 18:44:45.378286 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerName="nova-metadata-log" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.378303 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerName="nova-metadata-log" Oct 02 18:44:45 crc kubenswrapper[4832]: E1002 18:44:45.378353 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerName="nova-metadata-metadata" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.378361 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" 
containerName="nova-metadata-metadata" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.378585 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerName="nova-metadata-log" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.378614 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" containerName="nova-metadata-metadata" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.380112 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.383350 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.383882 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.398784 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.506067 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-config-data\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.506172 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.506231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/071fea74-b76f-4aa3-b6da-77c876fb1234-logs\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.506299 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.506341 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tw5\" (UniqueName: \"kubernetes.io/projected/071fea74-b76f-4aa3-b6da-77c876fb1234-kube-api-access-z4tw5\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.608568 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4tw5\" (UniqueName: \"kubernetes.io/projected/071fea74-b76f-4aa3-b6da-77c876fb1234-kube-api-access-z4tw5\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.609116 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-config-data\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.609286 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.609413 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/071fea74-b76f-4aa3-b6da-77c876fb1234-logs\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.609544 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.610839 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/071fea74-b76f-4aa3-b6da-77c876fb1234-logs\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.617926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.618822 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.627662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4tw5\" (UniqueName: \"kubernetes.io/projected/071fea74-b76f-4aa3-b6da-77c876fb1234-kube-api-access-z4tw5\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.629408 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-config-data\") pod \"nova-metadata-0\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " pod="openstack/nova-metadata-0" Oct 02 18:44:45 crc kubenswrapper[4832]: I1002 18:44:45.708229 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.448428 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.526882 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kr474" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.558569 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.567269 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-sg-core-conf-yaml\") pod \"1e796fa8-fecd-474b-981e-61af334beee4\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.567622 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-log-httpd\") pod \"1e796fa8-fecd-474b-981e-61af334beee4\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.567798 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp7t8\" (UniqueName: \"kubernetes.io/projected/1e796fa8-fecd-474b-981e-61af334beee4-kube-api-access-tp7t8\") pod \"1e796fa8-fecd-474b-981e-61af334beee4\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.567824 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-combined-ca-bundle\") pod \"1e796fa8-fecd-474b-981e-61af334beee4\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.568232 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-run-httpd\") pod \"1e796fa8-fecd-474b-981e-61af334beee4\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.568291 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-scripts\") pod \"1e796fa8-fecd-474b-981e-61af334beee4\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.568372 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-config-data\") pod \"1e796fa8-fecd-474b-981e-61af334beee4\" (UID: \"1e796fa8-fecd-474b-981e-61af334beee4\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.574126 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1e796fa8-fecd-474b-981e-61af334beee4" (UID: "1e796fa8-fecd-474b-981e-61af334beee4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.574650 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1e796fa8-fecd-474b-981e-61af334beee4" (UID: "1e796fa8-fecd-474b-981e-61af334beee4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.595961 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-scripts" (OuterVolumeSpecName: "scripts") pod "1e796fa8-fecd-474b-981e-61af334beee4" (UID: "1e796fa8-fecd-474b-981e-61af334beee4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.598315 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e796fa8-fecd-474b-981e-61af334beee4-kube-api-access-tp7t8" (OuterVolumeSpecName: "kube-api-access-tp7t8") pod "1e796fa8-fecd-474b-981e-61af334beee4" (UID: "1e796fa8-fecd-474b-981e-61af334beee4"). InnerVolumeSpecName "kube-api-access-tp7t8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.676105 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdpcb\" (UniqueName: \"kubernetes.io/projected/ecd658b9-1c22-4778-afde-b392155b499a-kube-api-access-cdpcb\") pod \"ecd658b9-1c22-4778-afde-b392155b499a\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.677398 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49z8x\" (UniqueName: \"kubernetes.io/projected/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-kube-api-access-49z8x\") pod \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.677704 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-svc\") pod \"ecd658b9-1c22-4778-afde-b392155b499a\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.677967 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-sb\") pod \"ecd658b9-1c22-4778-afde-b392155b499a\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.678446 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-nb\") pod \"ecd658b9-1c22-4778-afde-b392155b499a\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.678852 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-config\") pod \"ecd658b9-1c22-4778-afde-b392155b499a\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.682379 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-combined-ca-bundle\") pod \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.682836 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-scripts\") pod \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.683775 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-config-data\") pod \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\" (UID: \"ed3933a2-ea03-4354-bfa4-1ec240e12c9d\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.683951 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-swift-storage-0\") pod \"ecd658b9-1c22-4778-afde-b392155b499a\" (UID: \"ecd658b9-1c22-4778-afde-b392155b499a\") " Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.682898 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-kube-api-access-49z8x" (OuterVolumeSpecName: "kube-api-access-49z8x") pod "ed3933a2-ea03-4354-bfa4-1ec240e12c9d" (UID: "ed3933a2-ea03-4354-bfa4-1ec240e12c9d"). InnerVolumeSpecName "kube-api-access-49z8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.692388 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd658b9-1c22-4778-afde-b392155b499a-kube-api-access-cdpcb" (OuterVolumeSpecName: "kube-api-access-cdpcb") pod "ecd658b9-1c22-4778-afde-b392155b499a" (UID: "ecd658b9-1c22-4778-afde-b392155b499a"). InnerVolumeSpecName "kube-api-access-cdpcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.694721 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.697601 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp7t8\" (UniqueName: \"kubernetes.io/projected/1e796fa8-fecd-474b-981e-61af334beee4-kube-api-access-tp7t8\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.697737 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdpcb\" (UniqueName: \"kubernetes.io/projected/ecd658b9-1c22-4778-afde-b392155b499a-kube-api-access-cdpcb\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.697844 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49z8x\" (UniqueName: \"kubernetes.io/projected/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-kube-api-access-49z8x\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.697971 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e796fa8-fecd-474b-981e-61af334beee4-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.698075 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.696380 4832 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-scripts" (OuterVolumeSpecName: "scripts") pod "ed3933a2-ea03-4354-bfa4-1ec240e12c9d" (UID: "ed3933a2-ea03-4354-bfa4-1ec240e12c9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.792063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-config-data" (OuterVolumeSpecName: "config-data") pod "ed3933a2-ea03-4354-bfa4-1ec240e12c9d" (UID: "ed3933a2-ea03-4354-bfa4-1ec240e12c9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.800030 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.800055 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.804420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1e796fa8-fecd-474b-981e-61af334beee4" (UID: "1e796fa8-fecd-474b-981e-61af334beee4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.841789 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed3933a2-ea03-4354-bfa4-1ec240e12c9d" (UID: "ed3933a2-ea03-4354-bfa4-1ec240e12c9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.848788 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ecd658b9-1c22-4778-afde-b392155b499a" (UID: "ecd658b9-1c22-4778-afde-b392155b499a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.861188 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ecd658b9-1c22-4778-afde-b392155b499a" (UID: "ecd658b9-1c22-4778-afde-b392155b499a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.869635 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ecd658b9-1c22-4778-afde-b392155b499a" (UID: "ecd658b9-1c22-4778-afde-b392155b499a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.887780 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-config" (OuterVolumeSpecName: "config") pod "ecd658b9-1c22-4778-afde-b392155b499a" (UID: "ecd658b9-1c22-4778-afde-b392155b499a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.891151 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ecd658b9-1c22-4778-afde-b392155b499a" (UID: "ecd658b9-1c22-4778-afde-b392155b499a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.897911 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e796fa8-fecd-474b-981e-61af334beee4" (UID: "1e796fa8-fecd-474b-981e-61af334beee4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.903030 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.903061 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.903074 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed3933a2-ea03-4354-bfa4-1ec240e12c9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.903086 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.903096 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.903104 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.903112 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.903120 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ecd658b9-1c22-4778-afde-b392155b499a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:46 crc 
kubenswrapper[4832]: I1002 18:44:46.927958 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-config-data" (OuterVolumeSpecName: "config-data") pod "1e796fa8-fecd-474b-981e-61af334beee4" (UID: "1e796fa8-fecd-474b-981e-61af334beee4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.949867 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.974721 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e796fa8-fecd-474b-981e-61af334beee4","Type":"ContainerDied","Data":"6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8"}
Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.974790 4832 scope.go:117] "RemoveContainer" containerID="031ee0e51f42f30312df24f221d8d9c06a69fe226047e27108c774e513c9107c"
Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.974989 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.988371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerStarted","Data":"2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9"}
Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.996929 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-q287c"
Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.997704 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-q287c" event={"ID":"ecd658b9-1c22-4778-afde-b392155b499a","Type":"ContainerDied","Data":"2d09df4e608caaaf5f301323a3b61f91707a623db14f880568ef03f3dbb5d348"}
Oct 02 18:44:46 crc kubenswrapper[4832]: I1002 18:44:46.999409 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"071fea74-b76f-4aa3-b6da-77c876fb1234","Type":"ContainerStarted","Data":"c5ad4337b35e2d469a9ee8f0effcfe5dbccc2887707ab66887ed3834f531a8d5"}
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.004823 4832 scope.go:117] "RemoveContainer" containerID="9cea6ba09ce766a77e14f9ae90fd2d71d92226a7d026140efa4d234b484c6570"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.006568 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e796fa8-fecd-474b-981e-61af334beee4-config-data\") on node \"crc\" DevicePath \"\""
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.008436 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kr474"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.009353 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kr474" event={"ID":"ed3933a2-ea03-4354-bfa4-1ec240e12c9d","Type":"ContainerDied","Data":"c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7"}
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.009410 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.034155 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.050562 4832 scope.go:117] "RemoveContainer" containerID="de1b04afa2931ad448f7bf9605599f109acc91870715d38f68605e0c677f884d"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.052881 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.065690 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:44:47 crc kubenswrapper[4832]: E1002 18:44:47.069175 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="ceilometer-notification-agent"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069201 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="ceilometer-notification-agent"
Oct 02 18:44:47 crc kubenswrapper[4832]: E1002 18:44:47.069218 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd658b9-1c22-4778-afde-b392155b499a" containerName="init"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069224 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd658b9-1c22-4778-afde-b392155b499a" containerName="init"
Oct 02 18:44:47 crc kubenswrapper[4832]: E1002 18:44:47.069259 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed3933a2-ea03-4354-bfa4-1ec240e12c9d" containerName="nova-manage"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069266 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3933a2-ea03-4354-bfa4-1ec240e12c9d" containerName="nova-manage"
Oct 02 18:44:47 crc kubenswrapper[4832]: E1002 18:44:47.069300 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="sg-core"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069307 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="sg-core"
Oct 02 18:44:47 crc kubenswrapper[4832]: E1002 18:44:47.069326 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="proxy-httpd"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069332 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="proxy-httpd"
Oct 02 18:44:47 crc kubenswrapper[4832]: E1002 18:44:47.069352 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd658b9-1c22-4778-afde-b392155b499a" containerName="dnsmasq-dns"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069358 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd658b9-1c22-4778-afde-b392155b499a" containerName="dnsmasq-dns"
Oct 02 18:44:47 crc kubenswrapper[4832]: E1002 18:44:47.069371 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="ceilometer-central-agent"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069378 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="ceilometer-central-agent"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069673 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="sg-core"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069682 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed3933a2-ea03-4354-bfa4-1ec240e12c9d" containerName="nova-manage"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069696 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="proxy-httpd"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069712 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="ceilometer-central-agent"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069748 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e796fa8-fecd-474b-981e-61af334beee4" containerName="ceilometer-notification-agent"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.069766 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd658b9-1c22-4778-afde-b392155b499a" containerName="dnsmasq-dns"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.076382 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.079213 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.079410 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.083396 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-q287c"]
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.104995 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-q287c"]
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.108248 4832 scope.go:117] "RemoveContainer" containerID="da09935512be5e40b8efd16470d6d0ac96ae59fe10ea93118bec628b66838315"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.118817 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.141868 4832 scope.go:117] "RemoveContainer" containerID="7e538ab23eed35a72ba35c848b06cc5c8bc855896563d008d0a6e697c2c7a86f"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.187221 4832 scope.go:117] "RemoveContainer" containerID="09b11ad0b68032e8eea590922dfef1b8b899d2bad673182e297e52283b4285f0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.212919 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-run-httpd\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.213041 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.213310 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.213587 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-log-httpd\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.213679 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4qwz\" (UniqueName: \"kubernetes.io/projected/a3834871-a53a-40ca-8cae-a908ebe9908b-kube-api-access-l4qwz\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.213840 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-config-data\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.213873 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-scripts\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.242783 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e796fa8-fecd-474b-981e-61af334beee4" path="/var/lib/kubelet/pods/1e796fa8-fecd-474b-981e-61af334beee4/volumes"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.244594 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b55c9826-fe7e-4a17-800c-6e45446af3a2" path="/var/lib/kubelet/pods/b55c9826-fe7e-4a17-800c-6e45446af3a2/volumes"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.246834 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd658b9-1c22-4778-afde-b392155b499a" path="/var/lib/kubelet/pods/ecd658b9-1c22-4778-afde-b392155b499a/volumes"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.316100 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.316298 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-log-httpd\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.316377 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4qwz\" (UniqueName: \"kubernetes.io/projected/a3834871-a53a-40ca-8cae-a908ebe9908b-kube-api-access-l4qwz\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.316440 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-scripts\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.316462 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-config-data\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.316539 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-run-httpd\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.316603 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.316964 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-log-httpd\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.317493 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-run-httpd\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.321214 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.321428 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-config-data\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.322851 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.324700 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-scripts\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.332921 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4qwz\" (UniqueName: \"kubernetes.io/projected/a3834871-a53a-40ca-8cae-a908ebe9908b-kube-api-access-l4qwz\") pod \"ceilometer-0\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.422873 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.689085 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.689593 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-log" containerID="cri-o://16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10" gracePeriod=30
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.690052 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-api" containerID="cri-o://386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb" gracePeriod=30
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.711808 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 02 18:44:47 crc kubenswrapper[4832]: I1002 18:44:47.734699 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 02 18:44:48 crc kubenswrapper[4832]: I1002 18:44:47.959566 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:44:48 crc kubenswrapper[4832]: I1002 18:44:48.029864 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"071fea74-b76f-4aa3-b6da-77c876fb1234","Type":"ContainerStarted","Data":"139346285b041ba1f394b0eb862b7270f7a5102ee952f988bc955c9e9ca798ad"}
Oct 02 18:44:48 crc kubenswrapper[4832]: I1002 18:44:48.029912 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"071fea74-b76f-4aa3-b6da-77c876fb1234","Type":"ContainerStarted","Data":"08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7"}
Oct 02 18:44:48 crc kubenswrapper[4832]: I1002 18:44:48.035054 4832 generic.go:334] "Generic (PLEG): container finished" podID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerID="16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10" exitCode=143
Oct 02 18:44:48 crc kubenswrapper[4832]: I1002 18:44:48.035121 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe1d8845-2d10-4a17-b10a-44ea05f9671d","Type":"ContainerDied","Data":"16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10"}
Oct 02 18:44:48 crc kubenswrapper[4832]: I1002 18:44:48.038229 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0847b290-25a2-406a-8e3c-31952edbd846" containerName="nova-scheduler-scheduler" containerID="cri-o://d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c" gracePeriod=30
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0847b290-25a2-406a-8e3c-31952edbd846" containerName="nova-scheduler-scheduler" containerID="cri-o://d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c" gracePeriod=30 Oct 02 18:44:48 crc kubenswrapper[4832]: I1002 18:44:48.038573 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerStarted","Data":"849626be761fd22fb55c1b7e582eaf69b377f7bd1ceb5155fcc837429a978954"} Oct 02 18:44:48 crc kubenswrapper[4832]: I1002 18:44:48.049771 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.049750474 podStartE2EDuration="3.049750474s" podCreationTimestamp="2025-10-02 18:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:48.046838424 +0000 UTC m=+1445.016281296" watchObservedRunningTime="2025-10-02 18:44:48.049750474 +0000 UTC m=+1445.019193346" Oct 02 18:44:48 crc kubenswrapper[4832]: E1002 18:44:48.979157 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 18:44:48 crc kubenswrapper[4832]: E1002 18:44:48.981126 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 18:44:48 crc kubenswrapper[4832]: E1002 18:44:48.984699 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 18:44:48 crc kubenswrapper[4832]: E1002 18:44:48.984782 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0847b290-25a2-406a-8e3c-31952edbd846" containerName="nova-scheduler-scheduler" Oct 02 18:44:49 crc kubenswrapper[4832]: I1002 18:44:49.050093 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerName="nova-metadata-log" containerID="cri-o://08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7" gracePeriod=30 Oct 02 18:44:49 crc kubenswrapper[4832]: I1002 18:44:49.050118 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerName="nova-metadata-metadata" containerID="cri-o://139346285b041ba1f394b0eb862b7270f7a5102ee952f988bc955c9e9ca798ad" gracePeriod=30 Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.076518 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerStarted","Data":"46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff"} Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.076600 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-api" containerID="cri-o://5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26" gracePeriod=30 Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.076636 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-notifier" containerID="cri-o://2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9" gracePeriod=30 Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.076657 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-evaluator" containerID="cri-o://58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba" gracePeriod=30 Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.076674 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-listener" containerID="cri-o://46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff" gracePeriod=30 Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.090056 4832 generic.go:334] "Generic (PLEG): container finished" podID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerID="139346285b041ba1f394b0eb862b7270f7a5102ee952f988bc955c9e9ca798ad" exitCode=0 Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.090092 4832 generic.go:334] "Generic (PLEG): container finished" podID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerID="08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7" exitCode=143 Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.090116 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"071fea74-b76f-4aa3-b6da-77c876fb1234","Type":"ContainerDied","Data":"139346285b041ba1f394b0eb862b7270f7a5102ee952f988bc955c9e9ca798ad"} Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.090171 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"071fea74-b76f-4aa3-b6da-77c876fb1234","Type":"ContainerDied","Data":"08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7"} Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.102397 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerStarted","Data":"ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6"} Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.117065 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.602435788 podStartE2EDuration="17.117045826s" podCreationTimestamp="2025-10-02 18:44:33 +0000 UTC" firstStartedPulling="2025-10-02 18:44:35.187792311 +0000 UTC m=+1432.157235183" lastFinishedPulling="2025-10-02 18:44:48.702402349 +0000 UTC m=+1445.671845221" observedRunningTime="2025-10-02 18:44:50.101744714 +0000 UTC m=+1447.071187586" watchObservedRunningTime="2025-10-02 18:44:50.117045826 +0000 UTC m=+1447.086488698" Oct 02 18:44:50 crc 
kubenswrapper[4832]: I1002 18:44:50.124646 4832 generic.go:334] "Generic (PLEG): container finished" podID="0847b290-25a2-406a-8e3c-31952edbd846" containerID="d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c" exitCode=0 Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.124702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0847b290-25a2-406a-8e3c-31952edbd846","Type":"ContainerDied","Data":"d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c"} Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.324490 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.355175 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.414070 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-config-data\") pod \"071fea74-b76f-4aa3-b6da-77c876fb1234\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.414213 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/071fea74-b76f-4aa3-b6da-77c876fb1234-logs\") pod \"071fea74-b76f-4aa3-b6da-77c876fb1234\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.414748 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071fea74-b76f-4aa3-b6da-77c876fb1234-logs" (OuterVolumeSpecName: "logs") pod "071fea74-b76f-4aa3-b6da-77c876fb1234" (UID: "071fea74-b76f-4aa3-b6da-77c876fb1234"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.414818 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4tw5\" (UniqueName: \"kubernetes.io/projected/071fea74-b76f-4aa3-b6da-77c876fb1234-kube-api-access-z4tw5\") pod \"071fea74-b76f-4aa3-b6da-77c876fb1234\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.415162 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-nova-metadata-tls-certs\") pod \"071fea74-b76f-4aa3-b6da-77c876fb1234\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.415296 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-combined-ca-bundle\") pod \"071fea74-b76f-4aa3-b6da-77c876fb1234\" (UID: \"071fea74-b76f-4aa3-b6da-77c876fb1234\") " Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.415896 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/071fea74-b76f-4aa3-b6da-77c876fb1234-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.423613 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071fea74-b76f-4aa3-b6da-77c876fb1234-kube-api-access-z4tw5" (OuterVolumeSpecName: "kube-api-access-z4tw5") pod "071fea74-b76f-4aa3-b6da-77c876fb1234" (UID: "071fea74-b76f-4aa3-b6da-77c876fb1234"). InnerVolumeSpecName "kube-api-access-z4tw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.465412 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-config-data" (OuterVolumeSpecName: "config-data") pod "071fea74-b76f-4aa3-b6da-77c876fb1234" (UID: "071fea74-b76f-4aa3-b6da-77c876fb1234"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.469034 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "071fea74-b76f-4aa3-b6da-77c876fb1234" (UID: "071fea74-b76f-4aa3-b6da-77c876fb1234"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.483679 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "071fea74-b76f-4aa3-b6da-77c876fb1234" (UID: "071fea74-b76f-4aa3-b6da-77c876fb1234"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.517076 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-combined-ca-bundle\") pod \"0847b290-25a2-406a-8e3c-31952edbd846\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.517183 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-config-data\") pod \"0847b290-25a2-406a-8e3c-31952edbd846\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.517243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhd6h\" (UniqueName: \"kubernetes.io/projected/0847b290-25a2-406a-8e3c-31952edbd846-kube-api-access-bhd6h\") pod \"0847b290-25a2-406a-8e3c-31952edbd846\" (UID: \"0847b290-25a2-406a-8e3c-31952edbd846\") " Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.517718 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.517740 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4tw5\" (UniqueName: \"kubernetes.io/projected/071fea74-b76f-4aa3-b6da-77c876fb1234-kube-api-access-z4tw5\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.517750 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.517759 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/071fea74-b76f-4aa3-b6da-77c876fb1234-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.520217 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0847b290-25a2-406a-8e3c-31952edbd846-kube-api-access-bhd6h" (OuterVolumeSpecName: "kube-api-access-bhd6h") pod "0847b290-25a2-406a-8e3c-31952edbd846" (UID: "0847b290-25a2-406a-8e3c-31952edbd846"). InnerVolumeSpecName "kube-api-access-bhd6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.556175 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-config-data" (OuterVolumeSpecName: "config-data") pod "0847b290-25a2-406a-8e3c-31952edbd846" (UID: "0847b290-25a2-406a-8e3c-31952edbd846"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.558591 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0847b290-25a2-406a-8e3c-31952edbd846" (UID: "0847b290-25a2-406a-8e3c-31952edbd846"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.620049 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.620089 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0847b290-25a2-406a-8e3c-31952edbd846-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:50 crc kubenswrapper[4832]: I1002 18:44:50.620102 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhd6h\" (UniqueName: \"kubernetes.io/projected/0847b290-25a2-406a-8e3c-31952edbd846-kube-api-access-bhd6h\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.141502 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"071fea74-b76f-4aa3-b6da-77c876fb1234","Type":"ContainerDied","Data":"c5ad4337b35e2d469a9ee8f0effcfe5dbccc2887707ab66887ed3834f531a8d5"} Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.141780 4832 scope.go:117] "RemoveContainer" containerID="139346285b041ba1f394b0eb862b7270f7a5102ee952f988bc955c9e9ca798ad" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.141521 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.158316 4832 generic.go:334] "Generic (PLEG): container finished" podID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerID="386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb" exitCode=0 Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.158402 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe1d8845-2d10-4a17-b10a-44ea05f9671d","Type":"ContainerDied","Data":"386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb"} Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.177240 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerStarted","Data":"20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9"} Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.181654 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.182980 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0847b290-25a2-406a-8e3c-31952edbd846","Type":"ContainerDied","Data":"bb47efe3239d62553afdab9e6369edcf08953eded46b79974a0902b50aea755c"} Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.183054 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.191130 4832 scope.go:117] "RemoveContainer" containerID="08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.192000 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.192475 4832 generic.go:334] "Generic (PLEG): container finished" podID="88751a34-122e-469a-955d-d91072955b66" containerID="e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8" exitCode=0 Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.192532 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zznjp" event={"ID":"88751a34-122e-469a-955d-d91072955b66","Type":"ContainerDied","Data":"e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8"} Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.202808 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:51 crc kubenswrapper[4832]: E1002 18:44:51.203311 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerName="nova-metadata-metadata" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.203327 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerName="nova-metadata-metadata" Oct 02 18:44:51 crc kubenswrapper[4832]: E1002 18:44:51.203382 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerName="nova-metadata-log" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.203387 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerName="nova-metadata-log" Oct 02 18:44:51 crc kubenswrapper[4832]: E1002 18:44:51.203397 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0847b290-25a2-406a-8e3c-31952edbd846" containerName="nova-scheduler-scheduler" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.203404 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0847b290-25a2-406a-8e3c-31952edbd846" containerName="nova-scheduler-scheduler" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.203663 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerName="nova-metadata-log" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.203681 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" containerName="nova-metadata-metadata" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.203689 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0847b290-25a2-406a-8e3c-31952edbd846" containerName="nova-scheduler-scheduler" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.206574 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.208720 4832 generic.go:334] "Generic (PLEG): container finished" podID="9d861870-773c-4994-b599-62adec02a99a" containerID="58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba" exitCode=0 Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.208749 4832 generic.go:334] "Generic (PLEG): container finished" podID="9d861870-773c-4994-b599-62adec02a99a" containerID="5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26" exitCode=0 Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.208767 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerDied","Data":"58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba"} Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.208786 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerDied","Data":"5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26"} Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.211974 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.212077 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.221208 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.267688 4832 scope.go:117] "RemoveContainer" containerID="d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.270246 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071fea74-b76f-4aa3-b6da-77c876fb1234" path="/var/lib/kubelet/pods/071fea74-b76f-4aa3-b6da-77c876fb1234/volumes" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.290458 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.300836 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.314669 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.316968 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.319641 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.328046 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.342783 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.342894 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-config-data\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.342917 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.342934 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k56b\" (UniqueName: \"kubernetes.io/projected/e8b5344e-00d4-4ad7-9dd1-176828d155cc-kube-api-access-7k56b\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.343029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b5344e-00d4-4ad7-9dd1-176828d155cc-logs\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.444741 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kll92\" (UniqueName: \"kubernetes.io/projected/8377df42-e617-42e2-ace4-d085f917e879-kube-api-access-kll92\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.450105 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-config-data\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.450145 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.450179 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-7k56b\" (UniqueName: \"kubernetes.io/projected/e8b5344e-00d4-4ad7-9dd1-176828d155cc-kube-api-access-7k56b\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.450302 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-config-data\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.450416 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b5344e-00d4-4ad7-9dd1-176828d155cc-logs\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.450648 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.450771 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.452621 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b5344e-00d4-4ad7-9dd1-176828d155cc-logs\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.461765 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.462296 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.462885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-config-data\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.476939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k56b\" (UniqueName: \"kubernetes.io/projected/e8b5344e-00d4-4ad7-9dd1-176828d155cc-kube-api-access-7k56b\") pod \"nova-metadata-0\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 
18:44:51.552627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.552682 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kll92\" (UniqueName: \"kubernetes.io/projected/8377df42-e617-42e2-ace4-d085f917e879-kube-api-access-kll92\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.552785 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-config-data\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.557238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.559383 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.561246 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-config-data\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.568942 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kll92\" (UniqueName: \"kubernetes.io/projected/8377df42-e617-42e2-ace4-d085f917e879-kube-api-access-kll92\") pod \"nova-scheduler-0\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.654855 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d8845-2d10-4a17-b10a-44ea05f9671d-logs\") pod \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.655080 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qcd8\" (UniqueName: \"kubernetes.io/projected/fe1d8845-2d10-4a17-b10a-44ea05f9671d-kube-api-access-7qcd8\") pod \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.655244 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-combined-ca-bundle\") pod \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.655430 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-config-data\") pod \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\" (UID: \"fe1d8845-2d10-4a17-b10a-44ea05f9671d\") " Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.655433 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe1d8845-2d10-4a17-b10a-44ea05f9671d-logs" (OuterVolumeSpecName: "logs") pod "fe1d8845-2d10-4a17-b10a-44ea05f9671d" (UID: "fe1d8845-2d10-4a17-b10a-44ea05f9671d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.656619 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe1d8845-2d10-4a17-b10a-44ea05f9671d-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.660629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1d8845-2d10-4a17-b10a-44ea05f9671d-kube-api-access-7qcd8" (OuterVolumeSpecName: "kube-api-access-7qcd8") pod "fe1d8845-2d10-4a17-b10a-44ea05f9671d" (UID: "fe1d8845-2d10-4a17-b10a-44ea05f9671d"). InnerVolumeSpecName "kube-api-access-7qcd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.684441 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-config-data" (OuterVolumeSpecName: "config-data") pod "fe1d8845-2d10-4a17-b10a-44ea05f9671d" (UID: "fe1d8845-2d10-4a17-b10a-44ea05f9671d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.684590 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe1d8845-2d10-4a17-b10a-44ea05f9671d" (UID: "fe1d8845-2d10-4a17-b10a-44ea05f9671d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.709553 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.727476 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.758286 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.758578 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1d8845-2d10-4a17-b10a-44ea05f9671d-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:51 crc kubenswrapper[4832]: I1002 18:44:51.758588 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qcd8\" (UniqueName: \"kubernetes.io/projected/fe1d8845-2d10-4a17-b10a-44ea05f9671d-kube-api-access-7qcd8\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.254590 4832 generic.go:334] "Generic (PLEG): container finished" podID="9d861870-773c-4994-b599-62adec02a99a" containerID="2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9" exitCode=0 Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.254813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerDied","Data":"2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9"} Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.282567 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.282557 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe1d8845-2d10-4a17-b10a-44ea05f9671d","Type":"ContainerDied","Data":"1250d72a3a090ea49871efb5919c7ab31781fe1a5d522735d6c1f0ccc2c48007"} Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.285955 4832 scope.go:117] "RemoveContainer" containerID="386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.309939 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerStarted","Data":"c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97"} Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.389129 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.423423 4832 scope.go:117] "RemoveContainer" containerID="16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.474152 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.493496 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.521044 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.573537 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 02 18:44:52 crc kubenswrapper[4832]: E1002 18:44:52.574321 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-api" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.574337 4832 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-api" Oct 02 18:44:52 crc kubenswrapper[4832]: E1002 18:44:52.574391 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-log" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.574400 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-log" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.574710 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-log" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.574735 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" containerName="nova-api-api" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.576562 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.582425 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.583939 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.618128 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.618234 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl7xf\" (UniqueName: \"kubernetes.io/projected/d0952326-4649-4cb2-b2e3-049c3f13d3f0-kube-api-access-nl7xf\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.618589 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-config-data\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.618642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0952326-4649-4cb2-b2e3-049c3f13d3f0-logs\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.721602 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl7xf\" (UniqueName: \"kubernetes.io/projected/d0952326-4649-4cb2-b2e3-049c3f13d3f0-kube-api-access-nl7xf\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.721957 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-config-data\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " 
pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.722017 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0952326-4649-4cb2-b2e3-049c3f13d3f0-logs\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.722170 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.722464 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0952326-4649-4cb2-b2e3-049c3f13d3f0-logs\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.735911 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-config-data\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.738053 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.750585 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl7xf\" (UniqueName: \"kubernetes.io/projected/d0952326-4649-4cb2-b2e3-049c3f13d3f0-kube-api-access-nl7xf\") pod \"nova-api-0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " pod="openstack/nova-api-0" Oct 02 18:44:52 crc kubenswrapper[4832]: I1002 18:44:52.954907 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.077885 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.132103 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwxn7\" (UniqueName: \"kubernetes.io/projected/88751a34-122e-469a-955d-d91072955b66-kube-api-access-xwxn7\") pod \"88751a34-122e-469a-955d-d91072955b66\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.132542 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle\") pod \"88751a34-122e-469a-955d-d91072955b66\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.132704 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-scripts\") pod \"88751a34-122e-469a-955d-d91072955b66\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.132772 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-config-data\") pod \"88751a34-122e-469a-955d-d91072955b66\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.140485 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-scripts" (OuterVolumeSpecName: "scripts") pod "88751a34-122e-469a-955d-d91072955b66" (UID: "88751a34-122e-469a-955d-d91072955b66"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.143405 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88751a34-122e-469a-955d-d91072955b66-kube-api-access-xwxn7" (OuterVolumeSpecName: "kube-api-access-xwxn7") pod "88751a34-122e-469a-955d-d91072955b66" (UID: "88751a34-122e-469a-955d-d91072955b66"). InnerVolumeSpecName "kube-api-access-xwxn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:44:53 crc kubenswrapper[4832]: E1002 18:44:53.176617 4832 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle podName:88751a34-122e-469a-955d-d91072955b66 nodeName:}" failed. No retries permitted until 2025-10-02 18:44:53.676587963 +0000 UTC m=+1450.646030835 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle") pod "88751a34-122e-469a-955d-d91072955b66" (UID: "88751a34-122e-469a-955d-d91072955b66") : error deleting /var/lib/kubelet/pods/88751a34-122e-469a-955d-d91072955b66/volume-subpaths: remove /var/lib/kubelet/pods/88751a34-122e-469a-955d-d91072955b66/volume-subpaths: no such file or directory Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.179854 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-config-data" (OuterVolumeSpecName: "config-data") pod "88751a34-122e-469a-955d-d91072955b66" (UID: "88751a34-122e-469a-955d-d91072955b66"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.235138 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwxn7\" (UniqueName: \"kubernetes.io/projected/88751a34-122e-469a-955d-d91072955b66-kube-api-access-xwxn7\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.235173 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.235182 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.238238 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0847b290-25a2-406a-8e3c-31952edbd846" path="/var/lib/kubelet/pods/0847b290-25a2-406a-8e3c-31952edbd846/volumes" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.239049 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1d8845-2d10-4a17-b10a-44ea05f9671d" path="/var/lib/kubelet/pods/fe1d8845-2d10-4a17-b10a-44ea05f9671d/volumes" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.288152 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 02 18:44:53 crc kubenswrapper[4832]: E1002 18:44:53.289209 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88751a34-122e-469a-955d-d91072955b66" containerName="nova-cell1-conductor-db-sync" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.289227 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="88751a34-122e-469a-955d-d91072955b66" containerName="nova-cell1-conductor-db-sync" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.289603 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="88751a34-122e-469a-955d-d91072955b66" containerName="nova-cell1-conductor-db-sync" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.292578 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.328423 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.339585 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh7jb\" (UniqueName: \"kubernetes.io/projected/8657ca8f-f47b-476a-96f0-b5f5c313cb61-kube-api-access-vh7jb\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.339774 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8657ca8f-f47b-476a-96f0-b5f5c313cb61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.339884 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8657ca8f-f47b-476a-96f0-b5f5c313cb61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.357368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerStarted","Data":"e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14"} Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.357431 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.362068 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zznjp" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.362601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zznjp" event={"ID":"88751a34-122e-469a-955d-d91072955b66","Type":"ContainerDied","Data":"5e38f70d423123e3f08667789ee3ffde916ddf05db59ebb4371ef66e751ff326"} Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.362630 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e38f70d423123e3f08667789ee3ffde916ddf05db59ebb4371ef66e751ff326" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.367876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8377df42-e617-42e2-ace4-d085f917e879","Type":"ContainerStarted","Data":"6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132"} Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.367938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8377df42-e617-42e2-ace4-d085f917e879","Type":"ContainerStarted","Data":"aa516adc4fe1ecc60b641d806648b282360f24fec7546e74e0dff554ad3ad0ab"} Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.370949 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8b5344e-00d4-4ad7-9dd1-176828d155cc","Type":"ContainerStarted","Data":"ed8497c8a30f8959beec1ddecf48fd264cf70e1ce42f54f6c86c674196ad9640"} Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.371024 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8b5344e-00d4-4ad7-9dd1-176828d155cc","Type":"ContainerStarted","Data":"588d8197cd3f09be3d74fdd818dfb51c766379cbee324af3407a034d9f96f36c"} Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.371040 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8b5344e-00d4-4ad7-9dd1-176828d155cc","Type":"ContainerStarted","Data":"eaf5b3cdf08853abc2244a8d5074b90eb93962614ec8d019a9cd727b12ef5fa8"} Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.392125 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.3157424070000001 podStartE2EDuration="6.392104701s" podCreationTimestamp="2025-10-02 18:44:47 +0000 UTC" firstStartedPulling="2025-10-02 18:44:47.966439897 +0000 UTC m=+1444.935882769" lastFinishedPulling="2025-10-02 18:44:53.042802191 +0000 UTC m=+1450.012245063" observedRunningTime="2025-10-02 18:44:53.374954713 +0000 UTC m=+1450.344397585" watchObservedRunningTime="2025-10-02 18:44:53.392104701 +0000 UTC m=+1450.361547573" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.398362 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.398345563 podStartE2EDuration="2.398345563s" podCreationTimestamp="2025-10-02 18:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:53.392694109 +0000 UTC m=+1450.362136981" watchObservedRunningTime="2025-10-02 18:44:53.398345563 +0000 UTC m=+1450.367788435" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.437673 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.437648824 podStartE2EDuration="2.437648824s" 
podCreationTimestamp="2025-10-02 18:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:53.420678891 +0000 UTC m=+1450.390121763" watchObservedRunningTime="2025-10-02 18:44:53.437648824 +0000 UTC m=+1450.407091696" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.441698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh7jb\" (UniqueName: \"kubernetes.io/projected/8657ca8f-f47b-476a-96f0-b5f5c313cb61-kube-api-access-vh7jb\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.441839 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8657ca8f-f47b-476a-96f0-b5f5c313cb61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.441909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8657ca8f-f47b-476a-96f0-b5f5c313cb61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.447614 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8657ca8f-f47b-476a-96f0-b5f5c313cb61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.447817 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8657ca8f-f47b-476a-96f0-b5f5c313cb61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.464777 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh7jb\" (UniqueName: \"kubernetes.io/projected/8657ca8f-f47b-476a-96f0-b5f5c313cb61-kube-api-access-vh7jb\") pod \"nova-cell1-conductor-0\" (UID: \"8657ca8f-f47b-476a-96f0-b5f5c313cb61\") " pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.492106 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.647121 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.748248 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle\") pod \"88751a34-122e-469a-955d-d91072955b66\" (UID: \"88751a34-122e-469a-955d-d91072955b66\") " Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.758537 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88751a34-122e-469a-955d-d91072955b66" (UID: "88751a34-122e-469a-955d-d91072955b66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:44:53 crc kubenswrapper[4832]: I1002 18:44:53.852101 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88751a34-122e-469a-955d-d91072955b66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:44:54 crc kubenswrapper[4832]: I1002 18:44:54.207445 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 02 18:44:54 crc kubenswrapper[4832]: I1002 18:44:54.404855 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8657ca8f-f47b-476a-96f0-b5f5c313cb61","Type":"ContainerStarted","Data":"6776f46a6190b2bbbfa335d1664808cdf452cff01377516cf3900cf62dff5714"} Oct 02 18:44:54 crc kubenswrapper[4832]: I1002 18:44:54.406739 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0952326-4649-4cb2-b2e3-049c3f13d3f0","Type":"ContainerStarted","Data":"a55e4f7c19e844e121426c1c34929b755300912a1d79e4756f08f8561b868af9"} Oct 02 18:44:54 crc kubenswrapper[4832]: I1002 18:44:54.406803 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0952326-4649-4cb2-b2e3-049c3f13d3f0","Type":"ContainerStarted","Data":"0dd5ed343af17c2d8a164abac8674da859b1193f75fa25764bfce9787045f494"} Oct 02 18:44:54 crc kubenswrapper[4832]: I1002 18:44:54.406817 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0952326-4649-4cb2-b2e3-049c3f13d3f0","Type":"ContainerStarted","Data":"3d1e5056fbba25951d61c52f4d9c7d216d1cb9d93aab9ce76fd06f1cf94d9a54"} Oct 02 18:44:54 crc kubenswrapper[4832]: I1002 18:44:54.432883 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.432857561 podStartE2EDuration="2.432857561s" podCreationTimestamp="2025-10-02 18:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:54.422689068 +0000 UTC m=+1451.392131930" watchObservedRunningTime="2025-10-02 18:44:54.432857561 +0000 UTC m=+1451.402300433" Oct 02 18:44:55 crc kubenswrapper[4832]: I1002 18:44:55.424343 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8657ca8f-f47b-476a-96f0-b5f5c313cb61","Type":"ContainerStarted","Data":"d747f45e31bd0b7976bbef3b88325cc7fa677095794b6f1a5982cac0cd6909cc"} Oct 02 18:44:55 crc kubenswrapper[4832]: I1002 18:44:55.424875 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 02 18:44:55 crc 
kubenswrapper[4832]: I1002 18:44:55.451441 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.4514175959999998 podStartE2EDuration="2.451417596s" podCreationTimestamp="2025-10-02 18:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:44:55.443385729 +0000 UTC m=+1452.412828601" watchObservedRunningTime="2025-10-02 18:44:55.451417596 +0000 UTC m=+1452.420860478" Oct 02 18:44:56 crc kubenswrapper[4832]: I1002 18:44:56.710083 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 18:44:56 crc kubenswrapper[4832]: I1002 18:44:56.710382 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 18:44:56 crc kubenswrapper[4832]: I1002 18:44:56.728290 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 02 18:44:56 crc kubenswrapper[4832]: I1002 18:44:56.875872 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:44:56 crc kubenswrapper[4832]: I1002 18:44:56.875941 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.195626 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh"] Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.198849 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.202657 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.204731 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.221899 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh"] Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.270822 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/499818db-8995-4563-9226-7ed704208bc6-config-volume\") pod \"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.270927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/499818db-8995-4563-9226-7ed704208bc6-secret-volume\") pod \"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.271081 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zcfc\" (UniqueName: \"kubernetes.io/projected/499818db-8995-4563-9226-7ed704208bc6-kube-api-access-6zcfc\") pod \"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.372948 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/499818db-8995-4563-9226-7ed704208bc6-config-volume\") pod \"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.373038 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/499818db-8995-4563-9226-7ed704208bc6-secret-volume\") pod \"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.373159 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zcfc\" (UniqueName: \"kubernetes.io/projected/499818db-8995-4563-9226-7ed704208bc6-kube-api-access-6zcfc\") pod \"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.373868 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/499818db-8995-4563-9226-7ed704208bc6-config-volume\") pod 
\"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.378697 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/499818db-8995-4563-9226-7ed704208bc6-secret-volume\") pod \"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.390232 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zcfc\" (UniqueName: \"kubernetes.io/projected/499818db-8995-4563-9226-7ed704208bc6-kube-api-access-6zcfc\") pod \"collect-profiles-29323845-k47mh\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:00 crc kubenswrapper[4832]: I1002 18:45:00.537423 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:01 crc kubenswrapper[4832]: I1002 18:45:01.077561 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh"] Oct 02 18:45:01 crc kubenswrapper[4832]: I1002 18:45:01.504874 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" event={"ID":"499818db-8995-4563-9226-7ed704208bc6","Type":"ContainerStarted","Data":"69d8627734ffa6f4e67c2ff248aba21562db8a9ee8711d357d180c9e3ab38850"} Oct 02 18:45:01 crc kubenswrapper[4832]: I1002 18:45:01.505129 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" event={"ID":"499818db-8995-4563-9226-7ed704208bc6","Type":"ContainerStarted","Data":"6837f8ff41a385871ff985abc4a4caf66efc9df99e54a57ba9e205b90291fa1e"} Oct 02 18:45:01 crc kubenswrapper[4832]: I1002 18:45:01.525994 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" podStartSLOduration=1.5259762289999999 podStartE2EDuration="1.525976229s" podCreationTimestamp="2025-10-02 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:45:01.521559843 +0000 UTC m=+1458.491002715" watchObservedRunningTime="2025-10-02 18:45:01.525976229 +0000 UTC m=+1458.495419101" Oct 02 18:45:01 crc kubenswrapper[4832]: I1002 18:45:01.709739 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 02 18:45:01 crc kubenswrapper[4832]: I1002 18:45:01.709795 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 02 18:45:01 crc kubenswrapper[4832]: I1002 18:45:01.728322 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 02 18:45:01 crc kubenswrapper[4832]: I1002 18:45:01.761658 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 02 18:45:02 crc kubenswrapper[4832]: I1002 18:45:02.522120 4832 generic.go:334] "Generic (PLEG): container finished" podID="499818db-8995-4563-9226-7ed704208bc6" 
containerID="69d8627734ffa6f4e67c2ff248aba21562db8a9ee8711d357d180c9e3ab38850" exitCode=0 Oct 02 18:45:02 crc kubenswrapper[4832]: I1002 18:45:02.522224 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" event={"ID":"499818db-8995-4563-9226-7ed704208bc6","Type":"ContainerDied","Data":"69d8627734ffa6f4e67c2ff248aba21562db8a9ee8711d357d180c9e3ab38850"} Oct 02 18:45:02 crc kubenswrapper[4832]: I1002 18:45:02.580926 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 02 18:45:02 crc kubenswrapper[4832]: I1002 18:45:02.726549 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.243:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:02 crc kubenswrapper[4832]: I1002 18:45:02.726558 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.243:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:02 crc kubenswrapper[4832]: I1002 18:45:02.957864 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 18:45:02 crc kubenswrapper[4832]: I1002 18:45:02.957919 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 18:45:03 crc kubenswrapper[4832]: I1002 18:45:03.689163 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.039585 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.245:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.039620 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.245:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.177717 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.299323 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zcfc\" (UniqueName: \"kubernetes.io/projected/499818db-8995-4563-9226-7ed704208bc6-kube-api-access-6zcfc\") pod \"499818db-8995-4563-9226-7ed704208bc6\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.299444 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/499818db-8995-4563-9226-7ed704208bc6-secret-volume\") pod \"499818db-8995-4563-9226-7ed704208bc6\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.299554 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/499818db-8995-4563-9226-7ed704208bc6-config-volume\") pod \"499818db-8995-4563-9226-7ed704208bc6\" (UID: \"499818db-8995-4563-9226-7ed704208bc6\") " Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.300148 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/499818db-8995-4563-9226-7ed704208bc6-config-volume" (OuterVolumeSpecName: "config-volume") pod "499818db-8995-4563-9226-7ed704208bc6" (UID: "499818db-8995-4563-9226-7ed704208bc6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.301279 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/499818db-8995-4563-9226-7ed704208bc6-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.309417 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499818db-8995-4563-9226-7ed704208bc6-kube-api-access-6zcfc" (OuterVolumeSpecName: "kube-api-access-6zcfc") pod "499818db-8995-4563-9226-7ed704208bc6" (UID: "499818db-8995-4563-9226-7ed704208bc6"). InnerVolumeSpecName "kube-api-access-6zcfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.316890 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/499818db-8995-4563-9226-7ed704208bc6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "499818db-8995-4563-9226-7ed704208bc6" (UID: "499818db-8995-4563-9226-7ed704208bc6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.404354 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zcfc\" (UniqueName: \"kubernetes.io/projected/499818db-8995-4563-9226-7ed704208bc6-kube-api-access-6zcfc\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.404409 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/499818db-8995-4563-9226-7ed704208bc6-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.547745 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" event={"ID":"499818db-8995-4563-9226-7ed704208bc6","Type":"ContainerDied","Data":"6837f8ff41a385871ff985abc4a4caf66efc9df99e54a57ba9e205b90291fa1e"} Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.547787 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6837f8ff41a385871ff985abc4a4caf66efc9df99e54a57ba9e205b90291fa1e" Oct 02 18:45:04 crc kubenswrapper[4832]: I1002 18:45:04.547859 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh" Oct 02 18:45:11 crc kubenswrapper[4832]: I1002 18:45:11.715327 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 02 18:45:11 crc kubenswrapper[4832]: I1002 18:45:11.720120 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 02 18:45:11 crc kubenswrapper[4832]: I1002 18:45:11.721978 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 02 18:45:12 crc kubenswrapper[4832]: I1002 18:45:12.822069 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 02 18:45:12 crc kubenswrapper[4832]: I1002 18:45:12.961221 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 02 18:45:12 crc kubenswrapper[4832]: I1002 18:45:12.961312 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 02 18:45:12 crc kubenswrapper[4832]: I1002 18:45:12.961861 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 02 18:45:12 crc kubenswrapper[4832]: I1002 18:45:12.961910 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 02 18:45:12 crc kubenswrapper[4832]: I1002 18:45:12.964062 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 02 18:45:12 crc kubenswrapper[4832]: I1002 18:45:12.964619 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.164744 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fmdft"] Oct 02 18:45:13 crc kubenswrapper[4832]: E1002 18:45:13.165407 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499818db-8995-4563-9226-7ed704208bc6" containerName="collect-profiles" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.165430 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="499818db-8995-4563-9226-7ed704208bc6" 
containerName="collect-profiles" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.165689 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="499818db-8995-4563-9226-7ed704208bc6" containerName="collect-profiles" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.173497 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.189025 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fmdft"] Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.309993 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.310036 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.310097 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.310590 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.310681 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.310798 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbx4\" (UniqueName: \"kubernetes.io/projected/36bfe800-0313-487f-a2ba-bef9b88ff8c7-kube-api-access-4tbx4\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.413043 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.413113 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.413184 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbx4\" (UniqueName: \"kubernetes.io/projected/36bfe800-0313-487f-a2ba-bef9b88ff8c7-kube-api-access-4tbx4\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.413230 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.413248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.413306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.414004 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.414122 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-config\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.414202 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.414356 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.415213 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-nb\") 
pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.439473 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbx4\" (UniqueName: \"kubernetes.io/projected/36bfe800-0313-487f-a2ba-bef9b88ff8c7-kube-api-access-4tbx4\") pod \"dnsmasq-dns-6b7bbf7cf9-fmdft\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: I1002 18:45:13.492074 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:13 crc kubenswrapper[4832]: W1002 18:45:13.937559 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071fea74_b76f_4aa3_b6da_77c876fb1234.slice/crio-c5ad4337b35e2d469a9ee8f0effcfe5dbccc2887707ab66887ed3834f531a8d5 WatchSource:0}: Error finding container c5ad4337b35e2d469a9ee8f0effcfe5dbccc2887707ab66887ed3834f531a8d5: Status 404 returned error can't find the container with id c5ad4337b35e2d469a9ee8f0effcfe5dbccc2887707ab66887ed3834f531a8d5 Oct 02 18:45:13 crc kubenswrapper[4832]: W1002 18:45:13.938921 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071fea74_b76f_4aa3_b6da_77c876fb1234.slice/crio-08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7.scope WatchSource:0}: Error finding container 08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7: Status 404 returned error can't find the container with id 08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7 Oct 02 18:45:13 crc kubenswrapper[4832]: W1002 18:45:13.964368 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071fea74_b76f_4aa3_b6da_77c876fb1234.slice/crio-139346285b041ba1f394b0eb862b7270f7a5102ee952f988bc955c9e9ca798ad.scope WatchSource:0}: Error finding container 139346285b041ba1f394b0eb862b7270f7a5102ee952f988bc955c9e9ca798ad: Status 404 returned error can't find the container with id 139346285b041ba1f394b0eb862b7270f7a5102ee952f988bc955c9e9ca798ad Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.030672 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fmdft"] Oct 02 18:45:14 crc kubenswrapper[4832]: W1002 18:45:14.131662 4832 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod499818db_8995_4563_9226_7ed704208bc6.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod499818db_8995_4563_9226_7ed704208bc6.slice: no such file or directory Oct 02 18:45:14 crc kubenswrapper[4832]: E1002 18:45:14.189702 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e796fa8_fecd_474b_981e_61af334beee4.slice/crio-6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e796fa8_fecd_474b_981e_61af334beee4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-1250d72a3a090ea49871efb5919c7ab31781fe1a5d522735d6c1f0ccc2c48007\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-conmon-e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd658b9_1c22_4778_afde_b392155b499a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7aa817_b0e0_44b8_afb5_bf3a3e6e362c.slice/crio-9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3933a2_ea03_4354_bfa4_1ec240e12c9d.slice/crio-c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-conmon-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3933a2_ea03_4354_bfa4_1ec240e12c9d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-5e38f70d423123e3f08667789ee3ffde916ddf05db59ebb4371ef66e751ff326\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:14 crc kubenswrapper[4832]: E1002 18:45:14.192217 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e796fa8_fecd_474b_981e_61af334beee4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e796fa8_fecd_474b_981e_61af334beee4.slice/crio-6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-bb47efe3239d62553afdab9e6369edcf08953eded46b79974a0902b50aea755c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-conmon-e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-conmon-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-conmon-d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd658b9_1c22_4778_afde_b392155b499a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3933a2_ea03_4354_bfa4_1ec240e12c9d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-5e38f70d423123e3f08667789ee3ffde916ddf05db59ebb4371ef66e751ff326\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-1250d72a3a090ea49871efb5919c7ab31781fe1a5d522735d6c1f0ccc2c48007\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7aa817_b0e0_44b8_afb5_bf3a3e6e362c.slice/crio-9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3933a2_ea03_4354_bfa4_1ec240e12c9d.slice/crio-c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd658b9_1c22_4778_afde_b392155b499a.slice/crio-2d09df4e608caaaf5f301323a3b61f91707a623db14f880568ef03f3dbb5d348\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:14 crc kubenswrapper[4832]: E1002 18:45:14.200144 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3933a2_ea03_4354_bfa4_1ec240e12c9d.slice/crio-c79bbbc54e078b0ddc724d0a00073fa4dfee393097bb73e500a987eaf53350a7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e796fa8_fecd_474b_981e_61af334beee4.slice/crio-6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e796fa8_fecd_474b_981e_61af334beee4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd658b9_1c22_4778_afde_b392155b499a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-conmon-e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071fea74_b76f_4aa3_b6da_77c876fb1234.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-conmon-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd658b9_1c22_4778_afde_b392155b499a.slice/crio-2d09df4e608caaaf5f301323a3b61f91707a623db14f880568ef03f3dbb5d348\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-conmon-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c.scope\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:14 crc kubenswrapper[4832]: E1002 18:45:14.201687 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-1250d72a3a090ea49871efb5919c7ab31781fe1a5d522735d6c1f0ccc2c48007\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-conmon-e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-bb47efe3239d62553afdab9e6369edcf08953eded46b79974a0902b50aea755c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd658b9_1c22_4778_afde_b392155b499a.slice/crio-2d09df4e608caaaf5f301323a3b61f91707a623db14f880568ef03f3dbb5d348\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-conmon-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7aa817_b0e0_44b8_afb5_bf3a3e6e362c.slice/crio-conmon-9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-conmon-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e796fa8_fecd_474b_981e_61af334beee4.slice/crio-6ec59b427079cae51550ef1d171d721d3e5aad1fd0a9606a114808a2962647f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd658b9_1c22_4778_afde_b392155b499a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-conmon-d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7aa817_b0e0_44b8_afb5_bf3a3e6e362c.slice/crio-9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded3933a2_ea03_4354_bfa4_1ec240e12c9d.slice\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:14 crc kubenswrapper[4832]: E1002 18:45:14.231062 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-conmon-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-conmon-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-conmon-e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-16c0ab1b19c273dbce87502b663e0d495bd5210b1edcb021665d3a896fb75e10.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7aa817_b0e0_44b8_afb5_bf3a3e6e362c.slice/crio-9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9.scope\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071fea74_b76f_4aa3_b6da_77c876fb1234.slice/crio-conmon-08654b845118ee7820e068e73403edd6cd2e1f9fb8f2ad1ab70831dea6af08e7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-conmon-58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-386860b1230a30fecc03bcb0b6fad63d700dd70a271f52fd5082f99499c4c9fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1d8845_2d10_4a17_b10a_44ea05f9671d.slice/crio-1250d72a3a090ea49871efb5919c7ab31781fe1a5d522735d6c1f0ccc2c48007\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88751a34_122e_469a_955d_d91072955b66.slice/crio-5e38f70d423123e3f08667789ee3ffde916ddf05db59ebb4371ef66e751ff326\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce7aa817_b0e0_44b8_afb5_bf3a3e6e362c.slice/crio-conmon-9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d861870_773c_4994_b599_62adec02a99a.slice/crio-5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071fea74_b76f_4aa3_b6da_77c876fb1234.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-bb47efe3239d62553afdab9e6369edcf08953eded46b79974a0902b50aea755c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice/crio-conmon-d02a3275ddcf43890536ebf9479c38c3344c4ab371048cb53440a13bd007934c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0847b290_25a2_406a_8e3c_31952edbd846.slice\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.374448 4832 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.555601 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-combined-ca-bundle\") pod \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.556513 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtz96\" (UniqueName: \"kubernetes.io/projected/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-kube-api-access-vtz96\") pod \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.556573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-config-data\") pod \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\" (UID: \"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c\") " Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.566448 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-kube-api-access-vtz96" (OuterVolumeSpecName: "kube-api-access-vtz96") pod "ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" (UID: "ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c"). InnerVolumeSpecName "kube-api-access-vtz96". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.614041 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" (UID: "ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.668668 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.668705 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtz96\" (UniqueName: \"kubernetes.io/projected/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-kube-api-access-vtz96\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.672844 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-config-data" (OuterVolumeSpecName: "config-data") pod "ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" (UID: "ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.743619 4832 generic.go:334] "Generic (PLEG): container finished" podID="ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" containerID="9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5" exitCode=137 Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.743704 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c","Type":"ContainerDied","Data":"9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5"} Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.743735 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c","Type":"ContainerDied","Data":"c2324dad69118b50d160ca2f6f824c545223b8aa42495f2f489f9a03f710c42d"} Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.743756 4832 scope.go:117] "RemoveContainer" containerID="9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.743911 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.755097 4832 generic.go:334] "Generic (PLEG): container finished" podID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" containerID="401e40ce60292402ec5076a3a607af29fd727972156c77bbbf6d5ad5d8f40e4e" exitCode=0 Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.755546 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" event={"ID":"36bfe800-0313-487f-a2ba-bef9b88ff8c7","Type":"ContainerDied","Data":"401e40ce60292402ec5076a3a607af29fd727972156c77bbbf6d5ad5d8f40e4e"} Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.755623 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" event={"ID":"36bfe800-0313-487f-a2ba-bef9b88ff8c7","Type":"ContainerStarted","Data":"368026965f5302958ce1d7a5c8142f6966675b523120b97aeaa424c022c6f01b"} Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.771316 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.897130 4832 scope.go:117] "RemoveContainer" containerID="9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.909362 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8c2sx"] Oct 02 18:45:14 crc kubenswrapper[4832]: E1002 18:45:14.910667 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" containerName="nova-cell1-novncproxy-novncproxy" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.910685 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" containerName="nova-cell1-novncproxy-novncproxy" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.911146 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" containerName="nova-cell1-novncproxy-novncproxy" Oct 02 18:45:14 crc kubenswrapper[4832]: E1002 18:45:14.916746 4832 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5\": container with ID starting with 9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5 not found: ID does not exist" containerID="9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.916822 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5"} err="failed to get container status \"9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5\": rpc error: code = NotFound desc = could not find container \"9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5\": container with ID starting with 9b9024d87f7d38ad9f6ce66a220a7e83d081d74f64c41a673423f9cd0b730de5 not found: ID does not exist" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.922014 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:14 crc kubenswrapper[4832]: I1002 18:45:14.934442 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8c2sx"] Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.034392 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.049565 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.068188 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.077594 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.082712 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.083108 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.083912 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.086912 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.111842 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cj44\" (UniqueName: \"kubernetes.io/projected/2b2c05ec-ad79-43c8-8b11-4406770b8875-kube-api-access-9cj44\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.111902 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-utilities\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.112008 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-catalog-content\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.213908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m275n\" (UniqueName: \"kubernetes.io/projected/ce0ea362-776c-4b12-b3b6-9f684521d40f-kube-api-access-m275n\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.213961 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cj44\" (UniqueName: \"kubernetes.io/projected/2b2c05ec-ad79-43c8-8b11-4406770b8875-kube-api-access-9cj44\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.213996 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-utilities\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.214045 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.214076 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-catalog-content\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.214171 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.214204 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.214285 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.215168 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-utilities\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.215302 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-catalog-content\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.238741 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cj44\" (UniqueName: \"kubernetes.io/projected/2b2c05ec-ad79-43c8-8b11-4406770b8875-kube-api-access-9cj44\") pod \"redhat-operators-8c2sx\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.246316 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c" path="/var/lib/kubelet/pods/ce7aa817-b0e0-44b8-afb5-bf3a3e6e362c/volumes" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.262962 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.316558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.316818 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.316908 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.316961 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m275n\" (UniqueName: \"kubernetes.io/projected/ce0ea362-776c-4b12-b3b6-9f684521d40f-kube-api-access-m275n\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.317018 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.320817 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.322066 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.325180 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.335792 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce0ea362-776c-4b12-b3b6-9f684521d40f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: 
I1002 18:45:15.342954 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m275n\" (UniqueName: \"kubernetes.io/projected/ce0ea362-776c-4b12-b3b6-9f684521d40f-kube-api-access-m275n\") pod \"nova-cell1-novncproxy-0\" (UID: \"ce0ea362-776c-4b12-b3b6-9f684521d40f\") " pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.419603 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.774882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" event={"ID":"36bfe800-0313-487f-a2ba-bef9b88ff8c7","Type":"ContainerStarted","Data":"b345ba202167c38f50777bf5322b833e6807b2109a7e1de77cfed6f0213ae877"} Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.775470 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.830644 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" podStartSLOduration=2.830627082 podStartE2EDuration="2.830627082s" podCreationTimestamp="2025-10-02 18:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:45:15.812971059 +0000 UTC m=+1472.782413931" watchObservedRunningTime="2025-10-02 18:45:15.830627082 +0000 UTC m=+1472.800069954" Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.981977 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.982301 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-log" containerID="cri-o://0dd5ed343af17c2d8a164abac8674da859b1193f75fa25764bfce9787045f494" gracePeriod=30 Oct 02 18:45:15 crc kubenswrapper[4832]: I1002 18:45:15.982439 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-api" containerID="cri-o://a55e4f7c19e844e121426c1c34929b755300912a1d79e4756f08f8561b868af9" gracePeriod=30 Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.065145 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8c2sx"] Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.150985 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.648620 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.649179 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="ceilometer-central-agent" containerID="cri-o://ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6" gracePeriod=30 Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.649238 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="proxy-httpd" 
containerID="cri-o://e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14" gracePeriod=30 Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.649323 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="sg-core" containerID="cri-o://c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97" gracePeriod=30 Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.649405 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="ceilometer-notification-agent" containerID="cri-o://20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9" gracePeriod=30 Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.661040 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.242:3000/\": EOF" Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.826189 4832 generic.go:334] "Generic (PLEG): container finished" podID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerID="0dd5ed343af17c2d8a164abac8674da859b1193f75fa25764bfce9787045f494" exitCode=143 Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.826367 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0952326-4649-4cb2-b2e3-049c3f13d3f0","Type":"ContainerDied","Data":"0dd5ed343af17c2d8a164abac8674da859b1193f75fa25764bfce9787045f494"} Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.833572 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce0ea362-776c-4b12-b3b6-9f684521d40f","Type":"ContainerStarted","Data":"5c7cdf5031e37286832666e5f7bbad1a33c5ff728e85ff337d9784dfbb9f7a0c"} Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.833629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ce0ea362-776c-4b12-b3b6-9f684521d40f","Type":"ContainerStarted","Data":"0058a9816e5323415a761cc691917eeb9719b350e556d389f747c24ed1cd4ab3"} Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.847453 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerID="c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97" exitCode=2 Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.847546 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerDied","Data":"c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97"} Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.852024 4832 generic.go:334] "Generic (PLEG): container finished" podID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerID="2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797" exitCode=0 Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.852538 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c2sx" event={"ID":"2b2c05ec-ad79-43c8-8b11-4406770b8875","Type":"ContainerDied","Data":"2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797"} Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.852601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8c2sx" event={"ID":"2b2c05ec-ad79-43c8-8b11-4406770b8875","Type":"ContainerStarted","Data":"01bbc07aa916d5da2c5fb3b1ffaa88fa2120de93bdcbbae4ed3e02f2976885c6"} Oct 02 18:45:16 crc kubenswrapper[4832]: I1002 18:45:16.859099 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.8590786230000003 podStartE2EDuration="2.859078623s" podCreationTimestamp="2025-10-02 18:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:45:16.85280977 +0000 UTC m=+1473.822252642" watchObservedRunningTime="2025-10-02 18:45:16.859078623 +0000 UTC m=+1473.828521495" Oct 02 18:45:17 crc kubenswrapper[4832]: I1002 18:45:17.423895 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.242:3000/\": dial tcp 10.217.0.242:3000: connect: connection refused" Oct 02 18:45:17 crc kubenswrapper[4832]: I1002 18:45:17.865193 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerID="e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14" exitCode=0 Oct 02 18:45:17 crc kubenswrapper[4832]: I1002 18:45:17.865451 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerID="ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6" exitCode=0 Oct 02 18:45:17 crc kubenswrapper[4832]: I1002 18:45:17.865235 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerDied","Data":"e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14"} Oct 02 18:45:17 crc kubenswrapper[4832]: I1002 18:45:17.865508 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerDied","Data":"ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6"} Oct 02 18:45:18 crc kubenswrapper[4832]: I1002 18:45:18.883789 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c2sx" event={"ID":"2b2c05ec-ad79-43c8-8b11-4406770b8875","Type":"ContainerStarted","Data":"605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b"} Oct 02 18:45:19 crc kubenswrapper[4832]: I1002 18:45:19.925480 4832 generic.go:334] "Generic (PLEG): container finished" podID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerID="a55e4f7c19e844e121426c1c34929b755300912a1d79e4756f08f8561b868af9" exitCode=0 Oct 02 18:45:19 crc kubenswrapper[4832]: I1002 18:45:19.925684 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0952326-4649-4cb2-b2e3-049c3f13d3f0","Type":"ContainerDied","Data":"a55e4f7c19e844e121426c1c34929b755300912a1d79e4756f08f8561b868af9"} Oct 02 18:45:19 crc kubenswrapper[4832]: I1002 18:45:19.925750 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0952326-4649-4cb2-b2e3-049c3f13d3f0","Type":"ContainerDied","Data":"3d1e5056fbba25951d61c52f4d9c7d216d1cb9d93aab9ce76fd06f1cf94d9a54"} Oct 02 18:45:19 crc kubenswrapper[4832]: I1002 18:45:19.925769 4832 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3d1e5056fbba25951d61c52f4d9c7d216d1cb9d93aab9ce76fd06f1cf94d9a54" Oct 02 18:45:19 crc kubenswrapper[4832]: I1002 18:45:19.926559 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.097586 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-combined-ca-bundle\") pod \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.097862 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-config-data\") pod \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.097895 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0952326-4649-4cb2-b2e3-049c3f13d3f0-logs\") pod \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.097984 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl7xf\" (UniqueName: \"kubernetes.io/projected/d0952326-4649-4cb2-b2e3-049c3f13d3f0-kube-api-access-nl7xf\") pod \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\" (UID: \"d0952326-4649-4cb2-b2e3-049c3f13d3f0\") " Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.099524 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0952326-4649-4cb2-b2e3-049c3f13d3f0-logs" (OuterVolumeSpecName: "logs") pod "d0952326-4649-4cb2-b2e3-049c3f13d3f0" (UID: "d0952326-4649-4cb2-b2e3-049c3f13d3f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.108589 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0952326-4649-4cb2-b2e3-049c3f13d3f0-kube-api-access-nl7xf" (OuterVolumeSpecName: "kube-api-access-nl7xf") pod "d0952326-4649-4cb2-b2e3-049c3f13d3f0" (UID: "d0952326-4649-4cb2-b2e3-049c3f13d3f0"). InnerVolumeSpecName "kube-api-access-nl7xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.145981 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0952326-4649-4cb2-b2e3-049c3f13d3f0" (UID: "d0952326-4649-4cb2-b2e3-049c3f13d3f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.148564 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-config-data" (OuterVolumeSpecName: "config-data") pod "d0952326-4649-4cb2-b2e3-049c3f13d3f0" (UID: "d0952326-4649-4cb2-b2e3-049c3f13d3f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.202053 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.202464 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0952326-4649-4cb2-b2e3-049c3f13d3f0-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.202530 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0952326-4649-4cb2-b2e3-049c3f13d3f0-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.202601 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl7xf\" (UniqueName: \"kubernetes.io/projected/d0952326-4649-4cb2-b2e3-049c3f13d3f0-kube-api-access-nl7xf\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.420800 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.870144 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.938965 4832 generic.go:334] "Generic (PLEG): container finished" podID="9d861870-773c-4994-b599-62adec02a99a" containerID="46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff" exitCode=137 Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.939026 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.939042 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerDied","Data":"46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff"} Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.939073 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.939084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d861870-773c-4994-b599-62adec02a99a","Type":"ContainerDied","Data":"d8974936518950d561faf59616c897cb0104fe3a253348dd0dc4c85439254cba"} Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.939116 4832 scope.go:117] "RemoveContainer" containerID="46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff" Oct 02 18:45:20 crc kubenswrapper[4832]: I1002 18:45:20.968220 4832 scope.go:117] "RemoveContainer" containerID="2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.006315 4832 scope.go:117] "RemoveContainer" containerID="58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.012163 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.029100 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-scripts\") pod \"9d861870-773c-4994-b599-62adec02a99a\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.029404 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ht72\" (UniqueName: \"kubernetes.io/projected/9d861870-773c-4994-b599-62adec02a99a-kube-api-access-6ht72\") pod \"9d861870-773c-4994-b599-62adec02a99a\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.029440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-combined-ca-bundle\") pod \"9d861870-773c-4994-b599-62adec02a99a\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.029540 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-config-data\") pod \"9d861870-773c-4994-b599-62adec02a99a\" (UID: \"9d861870-773c-4994-b599-62adec02a99a\") " Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.029435 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.039716 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d861870-773c-4994-b599-62adec02a99a-kube-api-access-6ht72" (OuterVolumeSpecName: "kube-api-access-6ht72") pod "9d861870-773c-4994-b599-62adec02a99a" (UID: "9d861870-773c-4994-b599-62adec02a99a"). InnerVolumeSpecName "kube-api-access-6ht72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.047573 4832 scope.go:117] "RemoveContainer" containerID="5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.056478 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.057217 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-listener" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057238 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-listener" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.057272 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-notifier" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057279 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-notifier" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.057288 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-api" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057294 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-api" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.057302 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-evaluator" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057309 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-evaluator" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.057333 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-api" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057338 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-api" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.057354 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-log" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057361 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-log" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057580 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-api" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057606 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" containerName="nova-api-log" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057615 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-api" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057625 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-listener" Oct 02 18:45:21 crc 
kubenswrapper[4832]: I1002 18:45:21.057636 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-evaluator" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.057647 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d861870-773c-4994-b599-62adec02a99a" containerName="aodh-notifier" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.059002 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.061485 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.061882 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.062111 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.070716 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.071474 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-scripts" (OuterVolumeSpecName: "scripts") pod "9d861870-773c-4994-b599-62adec02a99a" (UID: "9d861870-773c-4994-b599-62adec02a99a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.144185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-public-tls-certs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.144444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.144467 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glnr2\" (UniqueName: \"kubernetes.io/projected/57a27d75-be0a-495c-bbef-8cafcd375848-kube-api-access-glnr2\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.144522 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-internal-tls-certs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.144539 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a27d75-be0a-495c-bbef-8cafcd375848-logs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.144636 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-config-data\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.144899 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.144916 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ht72\" (UniqueName: \"kubernetes.io/projected/9d861870-773c-4994-b599-62adec02a99a-kube-api-access-6ht72\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.248574 4832 scope.go:117] "RemoveContainer" containerID="46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.249032 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff\": container with ID starting with 46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff not found: ID does not exist" containerID="46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.249061 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff"} err="failed to get container status \"46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff\": rpc error: code = NotFound desc = could not find container \"46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff\": container with ID starting with 46cc5836ad24974c07c7e9af5baae0901ad72a9722c713cb60ba84308923ddff not found: ID does not exist" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.249081 4832 scope.go:117] "RemoveContainer" containerID="2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.249671 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-public-tls-certs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.249709 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.249734 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glnr2\" (UniqueName: \"kubernetes.io/projected/57a27d75-be0a-495c-bbef-8cafcd375848-kube-api-access-glnr2\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.249777 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-internal-tls-certs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.249794 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a27d75-be0a-495c-bbef-8cafcd375848-logs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.249882 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-config-data\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.251131 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a27d75-be0a-495c-bbef-8cafcd375848-logs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.251755 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0952326-4649-4cb2-b2e3-049c3f13d3f0" path="/var/lib/kubelet/pods/d0952326-4649-4cb2-b2e3-049c3f13d3f0/volumes" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.258323 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9\": container with ID starting with 2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9 not found: ID does not exist" containerID="2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.258366 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9"} err="failed to get container status \"2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9\": rpc error: code = NotFound desc = could not find container \"2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9\": container with ID starting with 2e1f5e9fcf641d4fe304da7cf8b5b87d3c88bdd566c8d9f8f5b7c005889c1fe9 not found: ID does not exist" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.258396 4832 scope.go:117] "RemoveContainer" containerID="58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.269700 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba\": container with ID starting with 58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba not found: ID does not exist" containerID="58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.269741 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba"} err="failed to get container status \"58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba\": rpc error: code = NotFound desc = could not find container 
\"58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba\": container with ID starting with 58f9d3e0c3b246f8a960856ba21a5d6e021b9da06d11d0afa2d2acac44e5deba not found: ID does not exist" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.269766 4832 scope.go:117] "RemoveContainer" containerID="5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26" Oct 02 18:45:21 crc kubenswrapper[4832]: E1002 18:45:21.271860 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26\": container with ID starting with 5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26 not found: ID does not exist" containerID="5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.271886 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26"} err="failed to get container status \"5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26\": rpc error: code = NotFound desc = could not find container \"5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26\": container with ID starting with 5503bf74f5d1a743748a899b9021aad5df13461d0309596460edd5903bb3ec26 not found: ID does not exist" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.273739 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-public-tls-certs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.286015 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-config-data\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.303946 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.304850 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-internal-tls-certs\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.315843 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glnr2\" (UniqueName: \"kubernetes.io/projected/57a27d75-be0a-495c-bbef-8cafcd375848-kube-api-access-glnr2\") pod \"nova-api-0\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.362406 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-config-data" (OuterVolumeSpecName: "config-data") pod "9d861870-773c-4994-b599-62adec02a99a" (UID: "9d861870-773c-4994-b599-62adec02a99a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.373424 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d861870-773c-4994-b599-62adec02a99a" (UID: "9d861870-773c-4994-b599-62adec02a99a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.455635 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.455663 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d861870-773c-4994-b599-62adec02a99a-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.504384 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.729233 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.777225 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.796564 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.800016 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.804191 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.804403 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.804573 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.804874 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-hgfvc" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.805043 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.816115 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.989662 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-internal-tls-certs\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.990711 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggbv5\" (UniqueName: \"kubernetes.io/projected/726d8b9a-8bc1-4cea-a65f-e494847a6b72-kube-api-access-ggbv5\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 
18:45:21.991182 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-combined-ca-bundle\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.991542 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-public-tls-certs\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.992180 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-config-data\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:21 crc kubenswrapper[4832]: I1002 18:45:21.992296 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-scripts\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.094340 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-combined-ca-bundle\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.094668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-public-tls-certs\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.094772 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-config-data\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.094934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-scripts\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.095151 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-internal-tls-certs\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.095251 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggbv5\" (UniqueName: \"kubernetes.io/projected/726d8b9a-8bc1-4cea-a65f-e494847a6b72-kube-api-access-ggbv5\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.101778 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-config-data\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.101970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-combined-ca-bundle\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.102908 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-internal-tls-certs\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.103120 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-public-tls-certs\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.107283 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-scripts\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.116374 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggbv5\" (UniqueName: \"kubernetes.io/projected/726d8b9a-8bc1-4cea-a65f-e494847a6b72-kube-api-access-ggbv5\") pod \"aodh-0\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") " pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.128060 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Oct 02 18:45:22 crc kubenswrapper[4832]: W1002 18:45:22.147130 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57a27d75_be0a_495c_bbef_8cafcd375848.slice/crio-3d6c40641702fbc200089cc3f4353cc05fb2fff0eb5f4933da55566d58a7206c WatchSource:0}: Error finding container 3d6c40641702fbc200089cc3f4353cc05fb2fff0eb5f4933da55566d58a7206c: Status 404 returned error can't find the container with id 3d6c40641702fbc200089cc3f4353cc05fb2fff0eb5f4933da55566d58a7206c Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.149032 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:22 crc kubenswrapper[4832]: E1002 18:45:22.174884 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3834871_a53a_40ca_8cae_a908ebe9908b.slice/crio-20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2c05ec_ad79_43c8_8b11_4406770b8875.slice/crio-605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2c05ec_ad79_43c8_8b11_4406770b8875.slice/crio-conmon-605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b.scope\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.610915 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Oct 02 18:45:22 crc kubenswrapper[4832]: W1002 18:45:22.611875 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod726d8b9a_8bc1_4cea_a65f_e494847a6b72.slice/crio-de553126ff3f66a5b994613905ff565beb5aa0ff6377bb13685828d22b994ef9 WatchSource:0}: Error finding container de553126ff3f66a5b994613905ff565beb5aa0ff6377bb13685828d22b994ef9: Status 404 returned error can't find the container with id de553126ff3f66a5b994613905ff565beb5aa0ff6377bb13685828d22b994ef9 Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.703438 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.809553 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-log-httpd\") pod \"a3834871-a53a-40ca-8cae-a908ebe9908b\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.809637 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-sg-core-conf-yaml\") pod \"a3834871-a53a-40ca-8cae-a908ebe9908b\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.809742 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-scripts\") pod \"a3834871-a53a-40ca-8cae-a908ebe9908b\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.809821 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4qwz\" (UniqueName: \"kubernetes.io/projected/a3834871-a53a-40ca-8cae-a908ebe9908b-kube-api-access-l4qwz\") pod \"a3834871-a53a-40ca-8cae-a908ebe9908b\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.809950 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-config-data\") pod \"a3834871-a53a-40ca-8cae-a908ebe9908b\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.810042 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-run-httpd\") pod \"a3834871-a53a-40ca-8cae-a908ebe9908b\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.810100 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-combined-ca-bundle\") pod \"a3834871-a53a-40ca-8cae-a908ebe9908b\" (UID: \"a3834871-a53a-40ca-8cae-a908ebe9908b\") " Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.811162 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3834871-a53a-40ca-8cae-a908ebe9908b" (UID: "a3834871-a53a-40ca-8cae-a908ebe9908b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.811976 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3834871-a53a-40ca-8cae-a908ebe9908b" (UID: "a3834871-a53a-40ca-8cae-a908ebe9908b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.813802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-scripts" (OuterVolumeSpecName: "scripts") pod "a3834871-a53a-40ca-8cae-a908ebe9908b" (UID: "a3834871-a53a-40ca-8cae-a908ebe9908b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.815171 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3834871-a53a-40ca-8cae-a908ebe9908b-kube-api-access-l4qwz" (OuterVolumeSpecName: "kube-api-access-l4qwz") pod "a3834871-a53a-40ca-8cae-a908ebe9908b" (UID: "a3834871-a53a-40ca-8cae-a908ebe9908b"). InnerVolumeSpecName "kube-api-access-l4qwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.858496 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3834871-a53a-40ca-8cae-a908ebe9908b" (UID: "a3834871-a53a-40ca-8cae-a908ebe9908b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.912991 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.913296 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.913311 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.913322 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4qwz\" (UniqueName: \"kubernetes.io/projected/a3834871-a53a-40ca-8cae-a908ebe9908b-kube-api-access-l4qwz\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.913334 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3834871-a53a-40ca-8cae-a908ebe9908b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.929595 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3834871-a53a-40ca-8cae-a908ebe9908b" (UID: "a3834871-a53a-40ca-8cae-a908ebe9908b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.950427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-config-data" (OuterVolumeSpecName: "config-data") pod "a3834871-a53a-40ca-8cae-a908ebe9908b" (UID: "a3834871-a53a-40ca-8cae-a908ebe9908b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.973138 4832 generic.go:334] "Generic (PLEG): container finished" podID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerID="605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b" exitCode=0 Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.973205 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c2sx" event={"ID":"2b2c05ec-ad79-43c8-8b11-4406770b8875","Type":"ContainerDied","Data":"605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b"} Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.974158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerStarted","Data":"de553126ff3f66a5b994613905ff565beb5aa0ff6377bb13685828d22b994ef9"} Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.979688 4832 generic.go:334] "Generic (PLEG): container finished" podID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerID="20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9" exitCode=0 Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.979743 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerDied","Data":"20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9"} Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.979770 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3834871-a53a-40ca-8cae-a908ebe9908b","Type":"ContainerDied","Data":"849626be761fd22fb55c1b7e582eaf69b377f7bd1ceb5155fcc837429a978954"} Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.979788 4832 scope.go:117] "RemoveContainer" containerID="e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.979956 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.983280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a27d75-be0a-495c-bbef-8cafcd375848","Type":"ContainerStarted","Data":"e1ed127739b9f8fdf1d8b73c2c4c091aee8ab786b209204be6961bccf923fc83"} Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.983333 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a27d75-be0a-495c-bbef-8cafcd375848","Type":"ContainerStarted","Data":"c8f6ff065b7394d3c781c1bff19505fe5b942954153a64e8ab221881a4b7b22d"} Oct 02 18:45:22 crc kubenswrapper[4832]: I1002 18:45:22.983346 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a27d75-be0a-495c-bbef-8cafcd375848","Type":"ContainerStarted","Data":"3d6c40641702fbc200089cc3f4353cc05fb2fff0eb5f4933da55566d58a7206c"} Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.006874 4832 scope.go:117] "RemoveContainer" containerID="c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.014921 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.014972 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3834871-a53a-40ca-8cae-a908ebe9908b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.030360 4832 scope.go:117] "RemoveContainer" containerID="20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.045006 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.044987175 podStartE2EDuration="3.044987175s" podCreationTimestamp="2025-10-02 18:45:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:45:23.026988811 +0000 UTC m=+1479.996431693" watchObservedRunningTime="2025-10-02 18:45:23.044987175 +0000 UTC m=+1480.014430047" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.069795 4832 scope.go:117] "RemoveContainer" containerID="ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.073770 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.093906 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.106335 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:45:23 crc kubenswrapper[4832]: E1002 18:45:23.106971 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="proxy-httpd" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.106992 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="proxy-httpd" Oct 02 18:45:23 crc kubenswrapper[4832]: E1002 18:45:23.107022 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="sg-core" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.107031 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="sg-core" Oct 02 18:45:23 crc kubenswrapper[4832]: E1002 18:45:23.107044 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="ceilometer-notification-agent" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.107051 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="ceilometer-notification-agent" Oct 02 18:45:23 crc kubenswrapper[4832]: E1002 18:45:23.107069 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="ceilometer-central-agent" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.107075 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="ceilometer-central-agent" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.107379 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="sg-core" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.107400 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="ceilometer-notification-agent" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.107416 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="ceilometer-central-agent" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.107433 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" containerName="proxy-httpd" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.107415 4832 scope.go:117] "RemoveContainer" containerID="e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14" Oct 02 18:45:23 crc kubenswrapper[4832]: E1002 18:45:23.108888 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14\": container with ID starting with e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14 not found: ID does not exist" containerID="e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.108934 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14"} err="failed to get container status \"e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14\": rpc error: code = NotFound desc = could not find container \"e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14\": container with ID starting with e8eeba79cf2cc3a8aef35105c8b1a6bc746518c179a315728eb5d769d31d5a14 not found: ID does not exist" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.108964 4832 scope.go:117] "RemoveContainer" containerID="c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97" Oct 02 18:45:23 crc kubenswrapper[4832]: E1002 18:45:23.109272 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97\": container with ID starting with c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97 not found: ID does not exist" containerID="c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.109316 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97"} err="failed to get container status \"c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97\": rpc error: code = NotFound desc = could not find container \"c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97\": container with ID starting with c6f89450ccd21aa3ebae9792253c06ddf87a1d4b5b8d9c7664404ce48c8f5d97 not found: ID does not exist" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.109339 4832 scope.go:117] "RemoveContainer" containerID="20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9" Oct 02 18:45:23 crc kubenswrapper[4832]: E1002 18:45:23.109639 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9\": container with ID starting with 20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9 not found: ID does not exist" containerID="20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.109666 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9"} err="failed to get container status \"20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9\": rpc error: code = NotFound desc = could not find container \"20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9\": container with ID starting with 20ec86f1e91485f1559eef17a29a1560b3c59d5d0b95e05e6dfc883acebcc2f9 not found: ID does not exist" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.109685 4832 scope.go:117] "RemoveContainer" containerID="ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6" Oct 02 18:45:23 crc kubenswrapper[4832]: E1002 18:45:23.109946 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6\": container with ID starting with ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6 not found: ID does not exist" containerID="ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.109971 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6"} err="failed to get container status \"ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6\": rpc error: code = NotFound desc = could not find container \"ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6\": container with ID starting with ce45f741a7d3233f464adb945e2cba071eb3292ad3725582be9b3b052f6851e6 not found: ID does not exist" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.109997 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.114419 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.115154 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.116921 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.219116 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjs7\" (UniqueName: \"kubernetes.io/projected/1fca7071-4799-4d4d-b132-4bea35f0aa6c-kube-api-access-rzjs7\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.219186 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-run-httpd\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.219363 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.219549 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-log-httpd\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.219663 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.219858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-scripts\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.220405 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-config-data\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.235133 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d861870-773c-4994-b599-62adec02a99a" path="/var/lib/kubelet/pods/9d861870-773c-4994-b599-62adec02a99a/volumes" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.235879 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a3834871-a53a-40ca-8cae-a908ebe9908b" path="/var/lib/kubelet/pods/a3834871-a53a-40ca-8cae-a908ebe9908b/volumes" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.323347 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-config-data\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.323443 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjs7\" (UniqueName: \"kubernetes.io/projected/1fca7071-4799-4d4d-b132-4bea35f0aa6c-kube-api-access-rzjs7\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.323496 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-run-httpd\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.323727 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.323842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-log-httpd\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.323902 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.324031 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-run-httpd\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.324146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-scripts\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.324306 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-log-httpd\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.330237 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.330570 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.332099 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-config-data\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.332824 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-scripts\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.340711 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjs7\" (UniqueName: \"kubernetes.io/projected/1fca7071-4799-4d4d-b132-4bea35f0aa6c-kube-api-access-rzjs7\") pod \"ceilometer-0\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.458233 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.493531 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.574745 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-knxks"] Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.575352 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-knxks" podUID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerName="dnsmasq-dns" containerID="cri-o://bec0a71cb6e0992991d837b29640a3f4051f0be9bdd55e136f188d6d9f997570" gracePeriod=10 Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.995953 4832 generic.go:334] "Generic (PLEG): container finished" podID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerID="bec0a71cb6e0992991d837b29640a3f4051f0be9bdd55e136f188d6d9f997570" exitCode=0 Oct 02 18:45:23 crc kubenswrapper[4832]: I1002 18:45:23.996018 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-knxks" event={"ID":"09048752-dbc7-4dd6-98c9-c74b48acf66d","Type":"ContainerDied","Data":"bec0a71cb6e0992991d837b29640a3f4051f0be9bdd55e136f188d6d9f997570"} Oct 02 18:45:24 crc kubenswrapper[4832]: W1002 18:45:24.079134 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fca7071_4799_4d4d_b132_4bea35f0aa6c.slice/crio-1b471d942df67222e9ca467bb47a8511b1c2fad9e95ed84d8a863fa0c3922689 WatchSource:0}: Error finding container 1b471d942df67222e9ca467bb47a8511b1c2fad9e95ed84d8a863fa0c3922689: Status 404 returned error can't find the container with id 1b471d942df67222e9ca467bb47a8511b1c2fad9e95ed84d8a863fa0c3922689 Oct 02 18:45:24 crc kubenswrapper[4832]: I1002 18:45:24.089042 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Oct 02 18:45:24 crc kubenswrapper[4832]: I1002 18:45:24.301035 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9b86998b5-knxks" podUID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.239:5353: connect: connection refused" Oct 02 18:45:24 crc kubenswrapper[4832]: E1002 18:45:24.319570 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:24 crc kubenswrapper[4832]: I1002 18:45:24.913321 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:45:24 crc kubenswrapper[4832]: I1002 18:45:24.986198 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-svc\") pod \"09048752-dbc7-4dd6-98c9-c74b48acf66d\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " Oct 02 18:45:24 crc kubenswrapper[4832]: I1002 18:45:24.986682 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-nb\") pod \"09048752-dbc7-4dd6-98c9-c74b48acf66d\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " Oct 02 18:45:24 crc kubenswrapper[4832]: I1002 18:45:24.986742 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2nz5\" (UniqueName: \"kubernetes.io/projected/09048752-dbc7-4dd6-98c9-c74b48acf66d-kube-api-access-f2nz5\") pod \"09048752-dbc7-4dd6-98c9-c74b48acf66d\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.005120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-sb\") pod \"09048752-dbc7-4dd6-98c9-c74b48acf66d\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.005314 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-swift-storage-0\") pod \"09048752-dbc7-4dd6-98c9-c74b48acf66d\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.005355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-config\") pod \"09048752-dbc7-4dd6-98c9-c74b48acf66d\" (UID: \"09048752-dbc7-4dd6-98c9-c74b48acf66d\") " Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.017546 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09048752-dbc7-4dd6-98c9-c74b48acf66d-kube-api-access-f2nz5" (OuterVolumeSpecName: "kube-api-access-f2nz5") pod "09048752-dbc7-4dd6-98c9-c74b48acf66d" (UID: "09048752-dbc7-4dd6-98c9-c74b48acf66d"). InnerVolumeSpecName "kube-api-access-f2nz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.037482 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-knxks" event={"ID":"09048752-dbc7-4dd6-98c9-c74b48acf66d","Type":"ContainerDied","Data":"5d84d8ed60e6f5631f9f073fb80e519d35ede3f150342746106ec8896b93659b"} Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.037782 4832 scope.go:117] "RemoveContainer" containerID="bec0a71cb6e0992991d837b29640a3f4051f0be9bdd55e136f188d6d9f997570" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.037495 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-knxks" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.044418 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerStarted","Data":"1b471d942df67222e9ca467bb47a8511b1c2fad9e95ed84d8a863fa0c3922689"} Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.067118 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09048752-dbc7-4dd6-98c9-c74b48acf66d" (UID: "09048752-dbc7-4dd6-98c9-c74b48acf66d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.067163 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09048752-dbc7-4dd6-98c9-c74b48acf66d" (UID: "09048752-dbc7-4dd6-98c9-c74b48acf66d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.070428 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09048752-dbc7-4dd6-98c9-c74b48acf66d" (UID: "09048752-dbc7-4dd6-98c9-c74b48acf66d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.084235 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-config" (OuterVolumeSpecName: "config") pod "09048752-dbc7-4dd6-98c9-c74b48acf66d" (UID: "09048752-dbc7-4dd6-98c9-c74b48acf66d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.111814 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.112120 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.112204 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.112312 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2nz5\" (UniqueName: \"kubernetes.io/projected/09048752-dbc7-4dd6-98c9-c74b48acf66d-kube-api-access-f2nz5\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.113030 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.116032 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09048752-dbc7-4dd6-98c9-c74b48acf66d" (UID: "09048752-dbc7-4dd6-98c9-c74b48acf66d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.215529 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09048752-dbc7-4dd6-98c9-c74b48acf66d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.367673 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-knxks"] Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.379314 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-knxks"] Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.420835 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.525758 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:25 crc kubenswrapper[4832]: I1002 18:45:25.673481 4832 scope.go:117] "RemoveContainer" containerID="3553efff0e8a732fa0f4252dd5919a2ab7477ff2f33f3a737307b0359397c720" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.082383 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.334135 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lvkb2"] Oct 02 18:45:26 crc kubenswrapper[4832]: E1002 18:45:26.334662 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerName="dnsmasq-dns" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 
18:45:26.334676 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerName="dnsmasq-dns" Oct 02 18:45:26 crc kubenswrapper[4832]: E1002 18:45:26.334700 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerName="init" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.334705 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerName="init" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.334951 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="09048752-dbc7-4dd6-98c9-c74b48acf66d" containerName="dnsmasq-dns" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.336147 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.338854 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.338933 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.362393 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvkb2"] Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.455325 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klvhh\" (UniqueName: \"kubernetes.io/projected/e6400222-5886-46c9-8018-4767737c3d12-kube-api-access-klvhh\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.455591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-scripts\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.455646 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.455790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-config-data\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.558847 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-scripts\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.558909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.558972 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-config-data\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.559179 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klvhh\" (UniqueName: \"kubernetes.io/projected/e6400222-5886-46c9-8018-4767737c3d12-kube-api-access-klvhh\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.565438 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.565682 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-config-data\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.565902 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-scripts\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.583136 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klvhh\" (UniqueName: \"kubernetes.io/projected/e6400222-5886-46c9-8018-4767737c3d12-kube-api-access-klvhh\") pod \"nova-cell1-cell-mapping-lvkb2\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.657504 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.876173 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.876545 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.876587 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.877519 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d11c7dd5e816b980d09e31f34fc920edcbd862f94c306350838f9fcadaa3f9f6"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 18:45:26 crc kubenswrapper[4832]: I1002 18:45:26.877579 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://d11c7dd5e816b980d09e31f34fc920edcbd862f94c306350838f9fcadaa3f9f6" gracePeriod=600 Oct 02 18:45:27 crc kubenswrapper[4832]: I1002 18:45:27.080983 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerStarted","Data":"346ffe19709bf006fe8f1548ef5133878fe707aa29eb63fcba950e877d3446f6"} Oct 02 18:45:27 crc kubenswrapper[4832]: I1002 18:45:27.084999 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerStarted","Data":"925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6"} Oct 02 18:45:27 crc kubenswrapper[4832]: I1002 18:45:27.088593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c2sx" event={"ID":"2b2c05ec-ad79-43c8-8b11-4406770b8875","Type":"ContainerStarted","Data":"0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2"} Oct 02 18:45:27 crc kubenswrapper[4832]: I1002 18:45:27.117184 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8c2sx" podStartSLOduration=4.122851323 podStartE2EDuration="13.117162335s" podCreationTimestamp="2025-10-02 18:45:14 +0000 UTC" firstStartedPulling="2025-10-02 18:45:16.882592228 +0000 UTC m=+1473.852035100" lastFinishedPulling="2025-10-02 18:45:25.87690324 +0000 UTC m=+1482.846346112" observedRunningTime="2025-10-02 18:45:27.10885973 +0000 UTC m=+1484.078302602" watchObservedRunningTime="2025-10-02 18:45:27.117162335 +0000 UTC m=+1484.086605207" Oct 02 18:45:27 crc kubenswrapper[4832]: W1002 18:45:27.225307 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6400222_5886_46c9_8018_4767737c3d12.slice/crio-7b9c53e8c1b22c57f74d9ca7f7eb4f36d53ae28223539e079cbabef0231153e6 WatchSource:0}: Error finding container 7b9c53e8c1b22c57f74d9ca7f7eb4f36d53ae28223539e079cbabef0231153e6: Status 404 returned error can't find the container with id 7b9c53e8c1b22c57f74d9ca7f7eb4f36d53ae28223539e079cbabef0231153e6 Oct 02 18:45:27 crc kubenswrapper[4832]: I1002 18:45:27.249721 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09048752-dbc7-4dd6-98c9-c74b48acf66d" path="/var/lib/kubelet/pods/09048752-dbc7-4dd6-98c9-c74b48acf66d/volumes" Oct 02 18:45:27 crc kubenswrapper[4832]: I1002 18:45:27.254254 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvkb2"] Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.105932 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="d11c7dd5e816b980d09e31f34fc920edcbd862f94c306350838f9fcadaa3f9f6" exitCode=0 Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.106552 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"d11c7dd5e816b980d09e31f34fc920edcbd862f94c306350838f9fcadaa3f9f6"} Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.106588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"} Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.106611 4832 scope.go:117] "RemoveContainer" containerID="c688f74c22f81ea3d61106cf0c7f62698937fd7d0fad6673fa18a7fd31c7b079" Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.116490 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerStarted","Data":"8ad03b05da840fe55c4236656bf2202fa604cf9de5ef9296fc4551bbbcbb752d"} Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.119993 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerStarted","Data":"4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873"} Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.128131 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvkb2" event={"ID":"e6400222-5886-46c9-8018-4767737c3d12","Type":"ContainerStarted","Data":"8f6060e1226594f5c9c4e91fed95b13761c9c5727a71c44f8e43a6fdf31695cc"} Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.128223 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvkb2" event={"ID":"e6400222-5886-46c9-8018-4767737c3d12","Type":"ContainerStarted","Data":"7b9c53e8c1b22c57f74d9ca7f7eb4f36d53ae28223539e079cbabef0231153e6"} Oct 02 18:45:28 crc kubenswrapper[4832]: I1002 18:45:28.149620 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lvkb2" podStartSLOduration=2.149602038 podStartE2EDuration="2.149602038s" podCreationTimestamp="2025-10-02 18:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-10-02 18:45:28.148484204 +0000 UTC m=+1485.117927076" watchObservedRunningTime="2025-10-02 18:45:28.149602038 +0000 UTC m=+1485.119044920" Oct 02 18:45:29 crc kubenswrapper[4832]: I1002 18:45:29.149763 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerStarted","Data":"e93da71715267eee779591d3111d9ffb8027577ad4001642a0de7ef82272ee0f"} Oct 02 18:45:29 crc kubenswrapper[4832]: I1002 18:45:29.153858 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerStarted","Data":"ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e"} Oct 02 18:45:30 crc kubenswrapper[4832]: I1002 18:45:30.179164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerStarted","Data":"a8c319a783234ef8e46e550d557694d921e9cc7d1f37f2efa7c64639efd71036"} Oct 02 18:45:30 crc kubenswrapper[4832]: I1002 18:45:30.209067 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.422289067 podStartE2EDuration="9.209053508s" podCreationTimestamp="2025-10-02 18:45:21 +0000 UTC" firstStartedPulling="2025-10-02 18:45:22.614198695 +0000 UTC m=+1479.583641567" lastFinishedPulling="2025-10-02 18:45:29.400963126 +0000 UTC m=+1486.370406008" observedRunningTime="2025-10-02 18:45:30.201497006 +0000 UTC m=+1487.170939878" watchObservedRunningTime="2025-10-02 18:45:30.209053508 +0000 UTC m=+1487.178496380" Oct 02 18:45:31 crc kubenswrapper[4832]: I1002 18:45:31.190973 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerStarted","Data":"32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef"} Oct 02 18:45:31 crc kubenswrapper[4832]: I1002 18:45:31.223224 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.17279162 podStartE2EDuration="8.223204418s" podCreationTimestamp="2025-10-02 18:45:23 +0000 UTC" firstStartedPulling="2025-10-02 18:45:24.081559266 +0000 UTC m=+1481.051002138" lastFinishedPulling="2025-10-02 18:45:30.131972064 +0000 UTC m=+1487.101414936" observedRunningTime="2025-10-02 18:45:31.214411647 +0000 UTC m=+1488.183854519" watchObservedRunningTime="2025-10-02 18:45:31.223204418 +0000 UTC m=+1488.192647290" Oct 02 18:45:31 crc kubenswrapper[4832]: I1002 18:45:31.505547 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 18:45:31 crc kubenswrapper[4832]: I1002 18:45:31.505787 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 18:45:32 crc kubenswrapper[4832]: I1002 18:45:32.208198 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 18:45:32 crc kubenswrapper[4832]: I1002 18:45:32.512632 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.251:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:32 crc kubenswrapper[4832]: I1002 18:45:32.512636 4832 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.251:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:33 crc kubenswrapper[4832]: I1002 18:45:33.222528 4832 generic.go:334] "Generic (PLEG): container finished" podID="e6400222-5886-46c9-8018-4767737c3d12" containerID="8f6060e1226594f5c9c4e91fed95b13761c9c5727a71c44f8e43a6fdf31695cc" exitCode=0 Oct 02 18:45:33 crc kubenswrapper[4832]: I1002 18:45:33.248464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvkb2" event={"ID":"e6400222-5886-46c9-8018-4767737c3d12","Type":"ContainerDied","Data":"8f6060e1226594f5c9c4e91fed95b13761c9c5727a71c44f8e43a6fdf31695cc"} Oct 02 18:45:34 crc kubenswrapper[4832]: E1002 18:45:34.673185 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.834000 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.865283 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-config-data\") pod \"e6400222-5886-46c9-8018-4767737c3d12\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.865768 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-combined-ca-bundle\") pod \"e6400222-5886-46c9-8018-4767737c3d12\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.865981 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klvhh\" (UniqueName: \"kubernetes.io/projected/e6400222-5886-46c9-8018-4767737c3d12-kube-api-access-klvhh\") pod \"e6400222-5886-46c9-8018-4767737c3d12\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.866146 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-scripts\") pod \"e6400222-5886-46c9-8018-4767737c3d12\" (UID: \"e6400222-5886-46c9-8018-4767737c3d12\") " Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.877323 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-scripts" (OuterVolumeSpecName: "scripts") pod "e6400222-5886-46c9-8018-4767737c3d12" (UID: "e6400222-5886-46c9-8018-4767737c3d12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.881982 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6400222-5886-46c9-8018-4767737c3d12-kube-api-access-klvhh" (OuterVolumeSpecName: "kube-api-access-klvhh") pod "e6400222-5886-46c9-8018-4767737c3d12" (UID: "e6400222-5886-46c9-8018-4767737c3d12"). 
InnerVolumeSpecName "kube-api-access-klvhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.910498 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6400222-5886-46c9-8018-4767737c3d12" (UID: "e6400222-5886-46c9-8018-4767737c3d12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.935990 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-config-data" (OuterVolumeSpecName: "config-data") pod "e6400222-5886-46c9-8018-4767737c3d12" (UID: "e6400222-5886-46c9-8018-4767737c3d12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.971626 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.971659 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.971675 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6400222-5886-46c9-8018-4767737c3d12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:34 crc kubenswrapper[4832]: I1002 18:45:34.971688 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klvhh\" (UniqueName: \"kubernetes.io/projected/e6400222-5886-46c9-8018-4767737c3d12-kube-api-access-klvhh\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.245919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lvkb2" event={"ID":"e6400222-5886-46c9-8018-4767737c3d12","Type":"ContainerDied","Data":"7b9c53e8c1b22c57f74d9ca7f7eb4f36d53ae28223539e079cbabef0231153e6"} Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.245967 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b9c53e8c1b22c57f74d9ca7f7eb4f36d53ae28223539e079cbabef0231153e6" Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.245974 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lvkb2" Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.263781 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.263830 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.485606 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.485871 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8377df42-e617-42e2-ace4-d085f917e879" containerName="nova-scheduler-scheduler" containerID="cri-o://6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132" gracePeriod=30 Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.524799 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.525489 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-log" containerID="cri-o://c8f6ff065b7394d3c781c1bff19505fe5b942954153a64e8ab221881a4b7b22d" gracePeriod=30 Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.525645 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-api" containerID="cri-o://e1ed127739b9f8fdf1d8b73c2c4c091aee8ab786b209204be6961bccf923fc83" gracePeriod=30 Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.544722 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.545057 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-log" containerID="cri-o://588d8197cd3f09be3d74fdd818dfb51c766379cbee324af3407a034d9f96f36c" gracePeriod=30 Oct 02 18:45:35 crc kubenswrapper[4832]: I1002 18:45:35.545703 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-metadata" containerID="cri-o://ed8497c8a30f8959beec1ddecf48fd264cf70e1ce42f54f6c86c674196ad9640" gracePeriod=30 Oct 02 18:45:36 crc kubenswrapper[4832]: I1002 18:45:36.257351 4832 generic.go:334] "Generic (PLEG): container finished" podID="57a27d75-be0a-495c-bbef-8cafcd375848" containerID="c8f6ff065b7394d3c781c1bff19505fe5b942954153a64e8ab221881a4b7b22d" exitCode=143 Oct 02 18:45:36 crc kubenswrapper[4832]: I1002 18:45:36.257440 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a27d75-be0a-495c-bbef-8cafcd375848","Type":"ContainerDied","Data":"c8f6ff065b7394d3c781c1bff19505fe5b942954153a64e8ab221881a4b7b22d"} Oct 02 18:45:36 crc kubenswrapper[4832]: I1002 18:45:36.260319 4832 generic.go:334] "Generic (PLEG): container finished" podID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerID="588d8197cd3f09be3d74fdd818dfb51c766379cbee324af3407a034d9f96f36c" exitCode=143 Oct 02 18:45:36 crc kubenswrapper[4832]: I1002 18:45:36.260406 4832 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8b5344e-00d4-4ad7-9dd1-176828d155cc","Type":"ContainerDied","Data":"588d8197cd3f09be3d74fdd818dfb51c766379cbee324af3407a034d9f96f36c"} Oct 02 18:45:36 crc kubenswrapper[4832]: I1002 18:45:36.325387 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8c2sx" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="registry-server" probeResult="failure" output=< Oct 02 18:45:36 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 18:45:36 crc kubenswrapper[4832]: > Oct 02 18:45:36 crc kubenswrapper[4832]: E1002 18:45:36.729309 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132 is running failed: container process not found" containerID="6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 18:45:36 crc kubenswrapper[4832]: E1002 18:45:36.729840 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132 is running failed: container process not found" containerID="6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 18:45:36 crc kubenswrapper[4832]: E1002 18:45:36.730154 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132 is running failed: container process not found" containerID="6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 02 18:45:36 crc kubenswrapper[4832]: E1002 18:45:36.730228 4832 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8377df42-e617-42e2-ace4-d085f917e879" containerName="nova-scheduler-scheduler" Oct 02 18:45:36 crc kubenswrapper[4832]: E1002 18:45:36.776827 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:36 crc kubenswrapper[4832]: I1002 18:45:36.946233 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.013983 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-config-data\") pod \"8377df42-e617-42e2-ace4-d085f917e879\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.014044 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-combined-ca-bundle\") pod \"8377df42-e617-42e2-ace4-d085f917e879\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.014253 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kll92\" (UniqueName: \"kubernetes.io/projected/8377df42-e617-42e2-ace4-d085f917e879-kube-api-access-kll92\") pod \"8377df42-e617-42e2-ace4-d085f917e879\" (UID: \"8377df42-e617-42e2-ace4-d085f917e879\") " Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.020130 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8377df42-e617-42e2-ace4-d085f917e879-kube-api-access-kll92" (OuterVolumeSpecName: "kube-api-access-kll92") pod "8377df42-e617-42e2-ace4-d085f917e879" (UID: "8377df42-e617-42e2-ace4-d085f917e879"). InnerVolumeSpecName "kube-api-access-kll92". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.086408 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8377df42-e617-42e2-ace4-d085f917e879" (UID: "8377df42-e617-42e2-ace4-d085f917e879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.107132 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-config-data" (OuterVolumeSpecName: "config-data") pod "8377df42-e617-42e2-ace4-d085f917e879" (UID: "8377df42-e617-42e2-ace4-d085f917e879"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.116864 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kll92\" (UniqueName: \"kubernetes.io/projected/8377df42-e617-42e2-ace4-d085f917e879-kube-api-access-kll92\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.116893 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.116903 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8377df42-e617-42e2-ace4-d085f917e879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.271161 4832 generic.go:334] "Generic (PLEG): container finished" podID="8377df42-e617-42e2-ace4-d085f917e879" containerID="6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132" exitCode=0 Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.271206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8377df42-e617-42e2-ace4-d085f917e879","Type":"ContainerDied","Data":"6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132"} Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.271235 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8377df42-e617-42e2-ace4-d085f917e879","Type":"ContainerDied","Data":"aa516adc4fe1ecc60b641d806648b282360f24fec7546e74e0dff554ad3ad0ab"} Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.271243 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.271253 4832 scope.go:117] "RemoveContainer" containerID="6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.292968 4832 scope.go:117] "RemoveContainer" containerID="6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132" Oct 02 18:45:37 crc kubenswrapper[4832]: E1002 18:45:37.293548 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132\": container with ID starting with 6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132 not found: ID does not exist" containerID="6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.293602 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132"} err="failed to get container status \"6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132\": rpc error: code = NotFound desc = could not find container \"6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132\": container with ID starting with 6cdc134107dfbda21e74a8b80816cdafb405cf3efef6f7127c81e64de3bb5132 not found: ID does not exist" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.297821 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.309620 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.319992 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:45:37 crc kubenswrapper[4832]: E1002 18:45:37.320546 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6400222-5886-46c9-8018-4767737c3d12" containerName="nova-manage" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.320566 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6400222-5886-46c9-8018-4767737c3d12" containerName="nova-manage" Oct 02 18:45:37 crc kubenswrapper[4832]: E1002 18:45:37.320593 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8377df42-e617-42e2-ace4-d085f917e879" containerName="nova-scheduler-scheduler" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.320602 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8377df42-e617-42e2-ace4-d085f917e879" containerName="nova-scheduler-scheduler" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.320859 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6400222-5886-46c9-8018-4767737c3d12" containerName="nova-manage" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.320899 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8377df42-e617-42e2-ace4-d085f917e879" containerName="nova-scheduler-scheduler" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.321680 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.323612 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.336738 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.424371 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f93334a-ea76-42f6-9f67-0788fac06f14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.424487 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f93334a-ea76-42f6-9f67-0788fac06f14-config-data\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.424810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcw5d\" (UniqueName: \"kubernetes.io/projected/8f93334a-ea76-42f6-9f67-0788fac06f14-kube-api-access-zcw5d\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.527503 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f93334a-ea76-42f6-9f67-0788fac06f14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.527666 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f93334a-ea76-42f6-9f67-0788fac06f14-config-data\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.527901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcw5d\" (UniqueName: \"kubernetes.io/projected/8f93334a-ea76-42f6-9f67-0788fac06f14-kube-api-access-zcw5d\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.532639 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f93334a-ea76-42f6-9f67-0788fac06f14-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.533040 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f93334a-ea76-42f6-9f67-0788fac06f14-config-data\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.544516 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcw5d\" (UniqueName: 
\"kubernetes.io/projected/8f93334a-ea76-42f6-9f67-0788fac06f14-kube-api-access-zcw5d\") pod \"nova-scheduler-0\" (UID: \"8f93334a-ea76-42f6-9f67-0788fac06f14\") " pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.666164 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.681094 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sqxrh"] Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.684052 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.715990 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqxrh"] Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.738457 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-utilities\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.738600 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-catalog-content\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.738678 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2d7\" (UniqueName: \"kubernetes.io/projected/1bec9629-2cbf-4111-808b-aa67cb8bd060-kube-api-access-4z2d7\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.841953 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-utilities\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.842021 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-catalog-content\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.842104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2d7\" (UniqueName: \"kubernetes.io/projected/1bec9629-2cbf-4111-808b-aa67cb8bd060-kube-api-access-4z2d7\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.842929 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-utilities\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.843165 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-catalog-content\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:37 crc kubenswrapper[4832]: I1002 18:45:37.866967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2d7\" (UniqueName: \"kubernetes.io/projected/1bec9629-2cbf-4111-808b-aa67cb8bd060-kube-api-access-4z2d7\") pod \"community-operators-sqxrh\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:38 crc kubenswrapper[4832]: I1002 18:45:38.019450 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:38 crc kubenswrapper[4832]: W1002 18:45:38.170455 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f93334a_ea76_42f6_9f67_0788fac06f14.slice/crio-ede5954d385720372970d2fc870d75d031e6ae6ab678884810b125a54d95709f WatchSource:0}: Error finding container ede5954d385720372970d2fc870d75d031e6ae6ab678884810b125a54d95709f: Status 404 returned error can't find the container with id ede5954d385720372970d2fc870d75d031e6ae6ab678884810b125a54d95709f Oct 02 18:45:38 crc kubenswrapper[4832]: I1002 18:45:38.172326 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 02 18:45:38 crc kubenswrapper[4832]: I1002 18:45:38.286272 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8f93334a-ea76-42f6-9f67-0788fac06f14","Type":"ContainerStarted","Data":"ede5954d385720372970d2fc870d75d031e6ae6ab678884810b125a54d95709f"} Oct 02 18:45:38 crc kubenswrapper[4832]: W1002 18:45:38.545290 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bec9629_2cbf_4111_808b_aa67cb8bd060.slice/crio-ff82a851b3a2cee8e33448a5d822ed6a259856d4e288587ee30ab8644859a4c9 WatchSource:0}: Error finding container ff82a851b3a2cee8e33448a5d822ed6a259856d4e288587ee30ab8644859a4c9: Status 404 returned error can't find the container with id ff82a851b3a2cee8e33448a5d822ed6a259856d4e288587ee30ab8644859a4c9 Oct 02 18:45:38 crc kubenswrapper[4832]: I1002 18:45:38.553056 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqxrh"] Oct 02 18:45:38 crc kubenswrapper[4832]: I1002 18:45:38.690815 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.243:8775/\": read tcp 10.217.0.2:58834->10.217.0.243:8775: read: connection reset by peer" Oct 02 18:45:38 crc kubenswrapper[4832]: I1002 18:45:38.690839 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.243:8775/\": read tcp 10.217.0.2:58848->10.217.0.243:8775: read: connection reset by peer" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.237986 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8377df42-e617-42e2-ace4-d085f917e879" path="/var/lib/kubelet/pods/8377df42-e617-42e2-ace4-d085f917e879/volumes" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.326504 4832 generic.go:334] "Generic (PLEG): container finished" podID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerID="ed8497c8a30f8959beec1ddecf48fd264cf70e1ce42f54f6c86c674196ad9640" exitCode=0 Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.326589 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8b5344e-00d4-4ad7-9dd1-176828d155cc","Type":"ContainerDied","Data":"ed8497c8a30f8959beec1ddecf48fd264cf70e1ce42f54f6c86c674196ad9640"} Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.348694 4832 generic.go:334] "Generic (PLEG): container finished" podID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerID="9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3" exitCode=0 Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.348797 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqxrh" event={"ID":"1bec9629-2cbf-4111-808b-aa67cb8bd060","Type":"ContainerDied","Data":"9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3"} Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.348829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqxrh" event={"ID":"1bec9629-2cbf-4111-808b-aa67cb8bd060","Type":"ContainerStarted","Data":"ff82a851b3a2cee8e33448a5d822ed6a259856d4e288587ee30ab8644859a4c9"} Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.387703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8f93334a-ea76-42f6-9f67-0788fac06f14","Type":"ContainerStarted","Data":"87468fc24f53856909ab7b499eda7ba5c4ad3c478d7edf1f7a24a4a9354cf994"} Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.396001 4832 generic.go:334] "Generic (PLEG): container finished" podID="57a27d75-be0a-495c-bbef-8cafcd375848" containerID="e1ed127739b9f8fdf1d8b73c2c4c091aee8ab786b209204be6961bccf923fc83" exitCode=0 Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.396068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a27d75-be0a-495c-bbef-8cafcd375848","Type":"ContainerDied","Data":"e1ed127739b9f8fdf1d8b73c2c4c091aee8ab786b209204be6961bccf923fc83"} Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.434082 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.434065458 podStartE2EDuration="2.434065458s" podCreationTimestamp="2025-10-02 18:45:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:45:39.416584966 +0000 UTC m=+1496.386027838" watchObservedRunningTime="2025-10-02 18:45:39.434065458 +0000 UTC m=+1496.403508330" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.474211 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.503751 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-public-tls-certs\") pod \"57a27d75-be0a-495c-bbef-8cafcd375848\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.503892 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-config-data\") pod \"57a27d75-be0a-495c-bbef-8cafcd375848\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.503927 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-combined-ca-bundle\") pod \"57a27d75-be0a-495c-bbef-8cafcd375848\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.503979 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a27d75-be0a-495c-bbef-8cafcd375848-logs\") pod \"57a27d75-be0a-495c-bbef-8cafcd375848\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.504008 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-internal-tls-certs\") pod \"57a27d75-be0a-495c-bbef-8cafcd375848\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.504126 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glnr2\" (UniqueName: \"kubernetes.io/projected/57a27d75-be0a-495c-bbef-8cafcd375848-kube-api-access-glnr2\") pod \"57a27d75-be0a-495c-bbef-8cafcd375848\" (UID: \"57a27d75-be0a-495c-bbef-8cafcd375848\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.521215 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a27d75-be0a-495c-bbef-8cafcd375848-logs" (OuterVolumeSpecName: "logs") pod "57a27d75-be0a-495c-bbef-8cafcd375848" (UID: "57a27d75-be0a-495c-bbef-8cafcd375848"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.524834 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a27d75-be0a-495c-bbef-8cafcd375848-kube-api-access-glnr2" (OuterVolumeSpecName: "kube-api-access-glnr2") pod "57a27d75-be0a-495c-bbef-8cafcd375848" (UID: "57a27d75-be0a-495c-bbef-8cafcd375848"). InnerVolumeSpecName "kube-api-access-glnr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.574199 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-config-data" (OuterVolumeSpecName: "config-data") pod "57a27d75-be0a-495c-bbef-8cafcd375848" (UID: "57a27d75-be0a-495c-bbef-8cafcd375848"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.585115 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57a27d75-be0a-495c-bbef-8cafcd375848" (UID: "57a27d75-be0a-495c-bbef-8cafcd375848"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.592709 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "57a27d75-be0a-495c-bbef-8cafcd375848" (UID: "57a27d75-be0a-495c-bbef-8cafcd375848"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.593664 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "57a27d75-be0a-495c-bbef-8cafcd375848" (UID: "57a27d75-be0a-495c-bbef-8cafcd375848"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.608242 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glnr2\" (UniqueName: \"kubernetes.io/projected/57a27d75-be0a-495c-bbef-8cafcd375848-kube-api-access-glnr2\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.608337 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.608350 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.608359 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.608367 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57a27d75-be0a-495c-bbef-8cafcd375848-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.608375 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57a27d75-be0a-495c-bbef-8cafcd375848-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.645319 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.711651 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k56b\" (UniqueName: \"kubernetes.io/projected/e8b5344e-00d4-4ad7-9dd1-176828d155cc-kube-api-access-7k56b\") pod \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.711801 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-nova-metadata-tls-certs\") pod \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.711900 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b5344e-00d4-4ad7-9dd1-176828d155cc-logs\") pod \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.712011 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-config-data\") pod \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.712205 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-combined-ca-bundle\") pod \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\" (UID: \"e8b5344e-00d4-4ad7-9dd1-176828d155cc\") " Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.712683 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8b5344e-00d4-4ad7-9dd1-176828d155cc-logs" (OuterVolumeSpecName: "logs") pod "e8b5344e-00d4-4ad7-9dd1-176828d155cc" (UID: "e8b5344e-00d4-4ad7-9dd1-176828d155cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.713295 4832 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8b5344e-00d4-4ad7-9dd1-176828d155cc-logs\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.715607 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b5344e-00d4-4ad7-9dd1-176828d155cc-kube-api-access-7k56b" (OuterVolumeSpecName: "kube-api-access-7k56b") pod "e8b5344e-00d4-4ad7-9dd1-176828d155cc" (UID: "e8b5344e-00d4-4ad7-9dd1-176828d155cc"). InnerVolumeSpecName "kube-api-access-7k56b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.745082 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8b5344e-00d4-4ad7-9dd1-176828d155cc" (UID: "e8b5344e-00d4-4ad7-9dd1-176828d155cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.778427 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-config-data" (OuterVolumeSpecName: "config-data") pod "e8b5344e-00d4-4ad7-9dd1-176828d155cc" (UID: "e8b5344e-00d4-4ad7-9dd1-176828d155cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.798157 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e8b5344e-00d4-4ad7-9dd1-176828d155cc" (UID: "e8b5344e-00d4-4ad7-9dd1-176828d155cc"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.815693 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.815723 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.815733 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8b5344e-00d4-4ad7-9dd1-176828d155cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:39 crc kubenswrapper[4832]: I1002 18:45:39.815742 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k56b\" (UniqueName: \"kubernetes.io/projected/e8b5344e-00d4-4ad7-9dd1-176828d155cc-kube-api-access-7k56b\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.411809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqxrh" event={"ID":"1bec9629-2cbf-4111-808b-aa67cb8bd060","Type":"ContainerStarted","Data":"697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962"} Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.415087 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.415158 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"57a27d75-be0a-495c-bbef-8cafcd375848","Type":"ContainerDied","Data":"3d6c40641702fbc200089cc3f4353cc05fb2fff0eb5f4933da55566d58a7206c"} Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.415240 4832 scope.go:117] "RemoveContainer" containerID="e1ed127739b9f8fdf1d8b73c2c4c091aee8ab786b209204be6961bccf923fc83" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.430820 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.430900 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e8b5344e-00d4-4ad7-9dd1-176828d155cc","Type":"ContainerDied","Data":"eaf5b3cdf08853abc2244a8d5074b90eb93962614ec8d019a9cd727b12ef5fa8"} Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.469564 4832 scope.go:117] "RemoveContainer" containerID="c8f6ff065b7394d3c781c1bff19505fe5b942954153a64e8ab221881a4b7b22d" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.493105 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.506173 4832 scope.go:117] "RemoveContainer" containerID="ed8497c8a30f8959beec1ddecf48fd264cf70e1ce42f54f6c86c674196ad9640" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.512750 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.533225 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.544749 4832 scope.go:117] "RemoveContainer" containerID="588d8197cd3f09be3d74fdd818dfb51c766379cbee324af3407a034d9f96f36c" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.545143 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.569474 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:45:40 crc kubenswrapper[4832]: E1002 18:45:40.570080 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-log" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.570108 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-log" Oct 02 18:45:40 crc kubenswrapper[4832]: E1002 18:45:40.570138 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-log" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.570148 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-log" Oct 02 18:45:40 crc kubenswrapper[4832]: E1002 18:45:40.570173 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-metadata" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.570183 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-metadata" Oct 02 18:45:40 crc kubenswrapper[4832]: E1002 18:45:40.570202 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-api" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.570210 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-api" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.570447 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-metadata" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.570469 4832 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-api" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.570487 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" containerName="nova-api-log" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.570503 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" containerName="nova-metadata-log" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.571842 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.575567 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.575646 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.583004 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.617328 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.619390 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.621624 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.621846 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.624205 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.636599 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-config-data\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.636635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.636703 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtkcf\" (UniqueName: \"kubernetes.io/projected/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-kube-api-access-mtkcf\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.636770 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-logs\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 
18:45:40.636822 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.636920 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739064 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739158 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzvbp\" (UniqueName: \"kubernetes.io/projected/02307835-a3c7-4dc6-add1-8c9a6daab69d-kube-api-access-vzvbp\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-config-data\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739292 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739404 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtkcf\" (UniqueName: \"kubernetes.io/projected/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-kube-api-access-mtkcf\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739443 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739520 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-public-tls-certs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739550 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-logs\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739573 4832 reconciler_common.go:245] 
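Volume setup above runs in two phases per volume: the reconciler first records "VerifyControllerAttachedVolume started" (marking the volume as expected in its state of the world), then "MountVolume started", and the pod can only start once every volume reports "MountVolume.SetUp succeeded". For a secret-backed volume like config-data, SetUp essentially materializes each secret key as a file under the pod's volumes directory. A hedged sketch of that materialization (paths and modes are illustrative assumptions, not the kubelet's exact behavior):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setUpSecretVolume writes each secret key as a file below the pod dir,
// roughly what a successful "MountVolume.SetUp" means for a secret volume.
func setUpSecretVolume(podDir, volName string, data map[string][]byte) error {
	dir := filepath.Join(podDir, "volumes", "kubernetes.io~secret", volName)
	if err := os.MkdirAll(dir, 0o750); err != nil {
		return err
	}
	for key, val := range data {
		if err := os.WriteFile(filepath.Join(dir, key), val, 0o640); err != nil {
			return fmt.Errorf("MountVolume.SetUp failed for %q: %w", volName, err)
		}
	}
	fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", volName)
	return nil
}

func main() {
	podDir := filepath.Join(os.TempDir(), "02307835-a3c7-4dc6-add1-8c9a6daab69d")
	_ = setUpSecretVolume(podDir, "config-data", map[string][]byte{"nova.conf": []byte("[DEFAULT]\n")})
}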
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02307835-a3c7-4dc6-add1-8c9a6daab69d-logs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739606 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.739660 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-config-data\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.740365 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-logs\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.743449 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.744135 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-config-data\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.750634 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.766226 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtkcf\" (UniqueName: \"kubernetes.io/projected/ca63490c-e0ae-4fc3-89cc-f20f8810c98c-kube-api-access-mtkcf\") pod \"nova-metadata-0\" (UID: \"ca63490c-e0ae-4fc3-89cc-f20f8810c98c\") " pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.841618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.841698 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-public-tls-certs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.841726 
4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02307835-a3c7-4dc6-add1-8c9a6daab69d-logs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.841748 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.841791 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-config-data\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.841841 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzvbp\" (UniqueName: \"kubernetes.io/projected/02307835-a3c7-4dc6-add1-8c9a6daab69d-kube-api-access-vzvbp\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.842182 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02307835-a3c7-4dc6-add1-8c9a6daab69d-logs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.845511 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-public-tls-certs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.845855 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.846939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.847743 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02307835-a3c7-4dc6-add1-8c9a6daab69d-config-data\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.859925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzvbp\" (UniqueName: \"kubernetes.io/projected/02307835-a3c7-4dc6-add1-8c9a6daab69d-kube-api-access-vzvbp\") pod \"nova-api-0\" (UID: \"02307835-a3c7-4dc6-add1-8c9a6daab69d\") " pod="openstack/nova-api-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.910310 4832 util.go:30] "No sandbox for pod can be found. 
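Two variants of the sandbox message alternate in this trace: util.go:30 ("No sandbox for pod can be found") for a freshly recreated pod, and util.go:48 ("No ready sandbox for pod can be found") when a sandbox exists but is no longer ready, e.g. after the old pod was torn down. Both end in the same action: asking the runtime for a new sandbox. A sketch of that decision with illustrative stand-in types (the real kubelet does this against CRI pod sandbox statuses):

package main

import "fmt"

type sandbox struct {
	id    string
	ready bool
}

// needsNewSandbox mirrors the two log variants above.
func needsNewSandbox(sandboxes []sandbox) (bool, string) {
	if len(sandboxes) == 0 {
		return true, "No sandbox for pod can be found. Need to start a new one"
	}
	for _, s := range sandboxes {
		if s.ready {
			return false, ""
		}
	}
	return true, "No ready sandbox for pod can be found. Need to start a new one"
}

func main() {
	for _, tc := range [][]sandbox{nil, {{id: "a93cb5ca", ready: false}}} {
		if ok, why := needsNewSandbox(tc); ok {
			fmt.Println(why)
		}
	}
}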
Need to start a new one" pod="openstack/nova-metadata-0" Oct 02 18:45:40 crc kubenswrapper[4832]: I1002 18:45:40.937344 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 02 18:45:41 crc kubenswrapper[4832]: I1002 18:45:41.238792 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a27d75-be0a-495c-bbef-8cafcd375848" path="/var/lib/kubelet/pods/57a27d75-be0a-495c-bbef-8cafcd375848/volumes" Oct 02 18:45:41 crc kubenswrapper[4832]: I1002 18:45:41.239918 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b5344e-00d4-4ad7-9dd1-176828d155cc" path="/var/lib/kubelet/pods/e8b5344e-00d4-4ad7-9dd1-176828d155cc/volumes" Oct 02 18:45:41 crc kubenswrapper[4832]: I1002 18:45:41.446714 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 02 18:45:41 crc kubenswrapper[4832]: I1002 18:45:41.461688 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.489485 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02307835-a3c7-4dc6-add1-8c9a6daab69d","Type":"ContainerStarted","Data":"6bc36bc675ac6184f96094415a0618f011a1ea7f550c4993e506201ba9fdf1b9"} Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.489745 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02307835-a3c7-4dc6-add1-8c9a6daab69d","Type":"ContainerStarted","Data":"e2aa774a2b464b1f792c45af92e11174a080e3a96620907b425ebe9989f2cd96"} Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.489756 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02307835-a3c7-4dc6-add1-8c9a6daab69d","Type":"ContainerStarted","Data":"8586ef8ee0e737622ec31a0cbe7fd369bb15cb221c259fbd9c50d51c0791ec81"} Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.500489 4832 generic.go:334] "Generic (PLEG): container finished" podID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerID="697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962" exitCode=0 Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.500572 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqxrh" event={"ID":"1bec9629-2cbf-4111-808b-aa67cb8bd060","Type":"ContainerDied","Data":"697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962"} Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.522500 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca63490c-e0ae-4fc3-89cc-f20f8810c98c","Type":"ContainerStarted","Data":"1d27f5d1e10b50907d36ace9618c04ff8abde6c5578f8fd9b97c203d5bc95850"} Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.522725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca63490c-e0ae-4fc3-89cc-f20f8810c98c","Type":"ContainerStarted","Data":"b6656888bdde71c770d3cebc92754d0d67d5f74774fe5284117eb2c4ac6b4be5"} Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.522734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ca63490c-e0ae-4fc3-89cc-f20f8810c98c","Type":"ContainerStarted","Data":"5881466e340bae137599c841524e31dd46791904aab37c57557525ad2204cfb1"} Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.537953 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.537930836 podStartE2EDuration="2.537930836s" podCreationTimestamp="2025-10-02 18:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:45:42.516277585 +0000 UTC m=+1499.485720457" watchObservedRunningTime="2025-10-02 18:45:42.537930836 +0000 UTC m=+1499.507373698" Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.606298 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.606279799 podStartE2EDuration="2.606279799s" podCreationTimestamp="2025-10-02 18:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:45:42.561648464 +0000 UTC m=+1499.531091336" watchObservedRunningTime="2025-10-02 18:45:42.606279799 +0000 UTC m=+1499.575722661" Oct 02 18:45:42 crc kubenswrapper[4832]: I1002 18:45:42.667955 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 02 18:45:43 crc kubenswrapper[4832]: I1002 18:45:43.538352 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqxrh" event={"ID":"1bec9629-2cbf-4111-808b-aa67cb8bd060","Type":"ContainerStarted","Data":"488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48"} Oct 02 18:45:43 crc kubenswrapper[4832]: I1002 18:45:43.571037 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sqxrh" podStartSLOduration=2.92595018 podStartE2EDuration="6.571015272s" podCreationTimestamp="2025-10-02 18:45:37 +0000 UTC" firstStartedPulling="2025-10-02 18:45:39.364487098 +0000 UTC m=+1496.333929980" lastFinishedPulling="2025-10-02 18:45:43.0095522 +0000 UTC m=+1499.978995072" observedRunningTime="2025-10-02 18:45:43.555307915 +0000 UTC m=+1500.524750807" watchObservedRunningTime="2025-10-02 18:45:43.571015272 +0000 UTC m=+1500.540458154" Oct 02 18:45:44 crc kubenswrapper[4832]: E1002 18:45:44.999232 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55c9826_fe7e_4a17_800c_6e45446af3a2.slice\": RecentStats: unable to find data in memory cache]" Oct 02 18:45:45 crc kubenswrapper[4832]: I1002 18:45:45.910512 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 18:45:45 crc kubenswrapper[4832]: I1002 18:45:45.910894 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.064193 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tvqjj"] Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.072935 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.082438 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tvqjj"] Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.169960 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-utilities\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.170220 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-catalog-content\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.171087 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7gr\" (UniqueName: \"kubernetes.io/projected/55589366-b8da-4dc5-a096-947a02752427-kube-api-access-9b7gr\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.273968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7gr\" (UniqueName: \"kubernetes.io/projected/55589366-b8da-4dc5-a096-947a02752427-kube-api-access-9b7gr\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.274324 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-utilities\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.274517 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-catalog-content\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.274865 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-utilities\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.275190 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-catalog-content\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.296647 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9b7gr\" (UniqueName: \"kubernetes.io/projected/55589366-b8da-4dc5-a096-947a02752427-kube-api-access-9b7gr\") pod \"certified-operators-tvqjj\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.333873 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8c2sx" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="registry-server" probeResult="failure" output=< Oct 02 18:45:46 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 18:45:46 crc kubenswrapper[4832]: > Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.402802 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:46 crc kubenswrapper[4832]: W1002 18:45:46.952915 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55589366_b8da_4dc5_a096_947a02752427.slice/crio-a93cb5ca4355da7110e783169caef866168fc74ee64c46b80d8cb405b0ddbc6b WatchSource:0}: Error finding container a93cb5ca4355da7110e783169caef866168fc74ee64c46b80d8cb405b0ddbc6b: Status 404 returned error can't find the container with id a93cb5ca4355da7110e783169caef866168fc74ee64c46b80d8cb405b0ddbc6b Oct 02 18:45:46 crc kubenswrapper[4832]: I1002 18:45:46.964094 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tvqjj"] Oct 02 18:45:47 crc kubenswrapper[4832]: I1002 18:45:47.590194 4832 generic.go:334] "Generic (PLEG): container finished" podID="55589366-b8da-4dc5-a096-947a02752427" containerID="9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c" exitCode=0 Oct 02 18:45:47 crc kubenswrapper[4832]: I1002 18:45:47.590342 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvqjj" event={"ID":"55589366-b8da-4dc5-a096-947a02752427","Type":"ContainerDied","Data":"9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c"} Oct 02 18:45:47 crc kubenswrapper[4832]: I1002 18:45:47.590588 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvqjj" event={"ID":"55589366-b8da-4dc5-a096-947a02752427","Type":"ContainerStarted","Data":"a93cb5ca4355da7110e783169caef866168fc74ee64c46b80d8cb405b0ddbc6b"} Oct 02 18:45:47 crc kubenswrapper[4832]: I1002 18:45:47.667975 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 02 18:45:47 crc kubenswrapper[4832]: I1002 18:45:47.702254 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 02 18:45:48 crc kubenswrapper[4832]: I1002 18:45:48.019818 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:48 crc kubenswrapper[4832]: I1002 18:45:48.019911 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:48 crc kubenswrapper[4832]: I1002 18:45:48.105238 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:48 crc kubenswrapper[4832]: I1002 18:45:48.606373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-tvqjj" event={"ID":"55589366-b8da-4dc5-a096-947a02752427","Type":"ContainerStarted","Data":"a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806"} Oct 02 18:45:48 crc kubenswrapper[4832]: I1002 18:45:48.665907 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 02 18:45:48 crc kubenswrapper[4832]: I1002 18:45:48.673567 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:50 crc kubenswrapper[4832]: I1002 18:45:50.462369 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sqxrh"] Oct 02 18:45:50 crc kubenswrapper[4832]: I1002 18:45:50.911114 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 02 18:45:50 crc kubenswrapper[4832]: I1002 18:45:50.911400 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 02 18:45:50 crc kubenswrapper[4832]: I1002 18:45:50.937947 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 18:45:50 crc kubenswrapper[4832]: I1002 18:45:50.938007 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 02 18:45:51 crc kubenswrapper[4832]: I1002 18:45:51.649965 4832 generic.go:334] "Generic (PLEG): container finished" podID="55589366-b8da-4dc5-a096-947a02752427" containerID="a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806" exitCode=0 Oct 02 18:45:51 crc kubenswrapper[4832]: I1002 18:45:51.650298 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sqxrh" podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerName="registry-server" containerID="cri-o://488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48" gracePeriod=2 Oct 02 18:45:51 crc kubenswrapper[4832]: I1002 18:45:51.650746 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvqjj" event={"ID":"55589366-b8da-4dc5-a096-947a02752427","Type":"ContainerDied","Data":"a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806"} Oct 02 18:45:51 crc kubenswrapper[4832]: I1002 18:45:51.924640 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ca63490c-e0ae-4fc3-89cc-f20f8810c98c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:51 crc kubenswrapper[4832]: I1002 18:45:51.924706 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ca63490c-e0ae-4fc3-89cc-f20f8810c98c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:51 crc kubenswrapper[4832]: I1002 18:45:51.956370 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="02307835-a3c7-4dc6-add1-8c9a6daab69d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.2:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:51 crc kubenswrapper[4832]: I1002 18:45:51.956415 4832 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="02307835-a3c7-4dc6-add1-8c9a6daab69d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.2:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.296636 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.433224 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-catalog-content\") pod \"1bec9629-2cbf-4111-808b-aa67cb8bd060\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.433383 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z2d7\" (UniqueName: \"kubernetes.io/projected/1bec9629-2cbf-4111-808b-aa67cb8bd060-kube-api-access-4z2d7\") pod \"1bec9629-2cbf-4111-808b-aa67cb8bd060\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.433731 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-utilities\") pod \"1bec9629-2cbf-4111-808b-aa67cb8bd060\" (UID: \"1bec9629-2cbf-4111-808b-aa67cb8bd060\") " Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.435036 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-utilities" (OuterVolumeSpecName: "utilities") pod "1bec9629-2cbf-4111-808b-aa67cb8bd060" (UID: "1bec9629-2cbf-4111-808b-aa67cb8bd060"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.443458 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bec9629-2cbf-4111-808b-aa67cb8bd060-kube-api-access-4z2d7" (OuterVolumeSpecName: "kube-api-access-4z2d7") pod "1bec9629-2cbf-4111-808b-aa67cb8bd060" (UID: "1bec9629-2cbf-4111-808b-aa67cb8bd060"). InnerVolumeSpecName "kube-api-access-4z2d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.497207 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bec9629-2cbf-4111-808b-aa67cb8bd060" (UID: "1bec9629-2cbf-4111-808b-aa67cb8bd060"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.536483 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z2d7\" (UniqueName: \"kubernetes.io/projected/1bec9629-2cbf-4111-808b-aa67cb8bd060-kube-api-access-4z2d7\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.536526 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.536537 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bec9629-2cbf-4111-808b-aa67cb8bd060-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.661687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvqjj" event={"ID":"55589366-b8da-4dc5-a096-947a02752427","Type":"ContainerStarted","Data":"38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd"} Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.664600 4832 generic.go:334] "Generic (PLEG): container finished" podID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerID="488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48" exitCode=0 Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.664646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqxrh" event={"ID":"1bec9629-2cbf-4111-808b-aa67cb8bd060","Type":"ContainerDied","Data":"488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48"} Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.664678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqxrh" event={"ID":"1bec9629-2cbf-4111-808b-aa67cb8bd060","Type":"ContainerDied","Data":"ff82a851b3a2cee8e33448a5d822ed6a259856d4e288587ee30ab8644859a4c9"} Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.664697 4832 scope.go:117] "RemoveContainer" containerID="488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.664928 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqxrh" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.693660 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tvqjj" podStartSLOduration=2.167783683 podStartE2EDuration="6.693645712s" podCreationTimestamp="2025-10-02 18:45:46 +0000 UTC" firstStartedPulling="2025-10-02 18:45:47.592543153 +0000 UTC m=+1504.561986035" lastFinishedPulling="2025-10-02 18:45:52.118405192 +0000 UTC m=+1509.087848064" observedRunningTime="2025-10-02 18:45:52.68648774 +0000 UTC m=+1509.655930622" watchObservedRunningTime="2025-10-02 18:45:52.693645712 +0000 UTC m=+1509.663088584" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.699140 4832 scope.go:117] "RemoveContainer" containerID="697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.718091 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sqxrh"] Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.729305 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sqxrh"] Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.756174 4832 scope.go:117] "RemoveContainer" containerID="9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.813969 4832 scope.go:117] "RemoveContainer" containerID="488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48" Oct 02 18:45:52 crc kubenswrapper[4832]: E1002 18:45:52.814677 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48\": container with ID starting with 488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48 not found: ID does not exist" containerID="488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.814767 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48"} err="failed to get container status \"488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48\": rpc error: code = NotFound desc = could not find container \"488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48\": container with ID starting with 488a49086b7dcae7fe1dfe31d9123728855c90bc762e22d8bb1952d97b35cf48 not found: ID does not exist" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.814847 4832 scope.go:117] "RemoveContainer" containerID="697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962" Oct 02 18:45:52 crc kubenswrapper[4832]: E1002 18:45:52.815173 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962\": container with ID starting with 697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962 not found: ID does not exist" containerID="697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.815248 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962"} err="failed to get 
container status \"697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962\": rpc error: code = NotFound desc = could not find container \"697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962\": container with ID starting with 697d9da9035420a1d6cdf80510d63b685747024e1da1dbf1fdc1bd4f58d07962 not found: ID does not exist" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.815338 4832 scope.go:117] "RemoveContainer" containerID="9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3" Oct 02 18:45:52 crc kubenswrapper[4832]: E1002 18:45:52.815654 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3\": container with ID starting with 9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3 not found: ID does not exist" containerID="9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3" Oct 02 18:45:52 crc kubenswrapper[4832]: I1002 18:45:52.815677 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3"} err="failed to get container status \"9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3\": rpc error: code = NotFound desc = could not find container \"9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3\": container with ID starting with 9c7c27e8ae6b8ed7f16c36b45d924544698a7a79f1e585161b792d90f032c2d3 not found: ID does not exist" Oct 02 18:45:53 crc kubenswrapper[4832]: I1002 18:45:53.233629 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" path="/var/lib/kubelet/pods/1bec9629-2cbf-4111-808b-aa67cb8bd060/volumes" Oct 02 18:45:53 crc kubenswrapper[4832]: I1002 18:45:53.476886 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 02 18:45:56 crc kubenswrapper[4832]: I1002 18:45:56.329114 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8c2sx" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="registry-server" probeResult="failure" output=< Oct 02 18:45:56 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 18:45:56 crc kubenswrapper[4832]: > Oct 02 18:45:56 crc kubenswrapper[4832]: I1002 18:45:56.403343 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:56 crc kubenswrapper[4832]: I1002 18:45:56.403586 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:45:57 crc kubenswrapper[4832]: I1002 18:45:57.462370 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tvqjj" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="registry-server" probeResult="failure" output=< Oct 02 18:45:57 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 18:45:57 crc kubenswrapper[4832]: > Oct 02 18:45:58 crc kubenswrapper[4832]: I1002 18:45:58.226780 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 18:45:58 crc kubenswrapper[4832]: I1002 18:45:58.227186 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" 
podUID="1eb62c48-8808-44e9-8fbc-781e0d252f01" containerName="kube-state-metrics" containerID="cri-o://414911ca9e1a1d92fcc7716770a0ac8e7d74081d10394f6ede3d2691b0cf872e" gracePeriod=30 Oct 02 18:45:58 crc kubenswrapper[4832]: I1002 18:45:58.355570 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:45:58 crc kubenswrapper[4832]: I1002 18:45:58.355762 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="04b1e748-a409-4bf1-b790-7517f2dfdfe4" containerName="mysqld-exporter" containerID="cri-o://722b6a31d27e7ef42f4934201e6976d164497cde5b6b5a2619e5997134b96d36" gracePeriod=30 Oct 02 18:45:58 crc kubenswrapper[4832]: I1002 18:45:58.759611 4832 generic.go:334] "Generic (PLEG): container finished" podID="1eb62c48-8808-44e9-8fbc-781e0d252f01" containerID="414911ca9e1a1d92fcc7716770a0ac8e7d74081d10394f6ede3d2691b0cf872e" exitCode=2 Oct 02 18:45:58 crc kubenswrapper[4832]: I1002 18:45:58.759716 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1eb62c48-8808-44e9-8fbc-781e0d252f01","Type":"ContainerDied","Data":"414911ca9e1a1d92fcc7716770a0ac8e7d74081d10394f6ede3d2691b0cf872e"} Oct 02 18:45:58 crc kubenswrapper[4832]: I1002 18:45:58.762237 4832 generic.go:334] "Generic (PLEG): container finished" podID="04b1e748-a409-4bf1-b790-7517f2dfdfe4" containerID="722b6a31d27e7ef42f4934201e6976d164497cde5b6b5a2619e5997134b96d36" exitCode=2 Oct 02 18:45:58 crc kubenswrapper[4832]: I1002 18:45:58.762301 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"04b1e748-a409-4bf1-b790-7517f2dfdfe4","Type":"ContainerDied","Data":"722b6a31d27e7ef42f4934201e6976d164497cde5b6b5a2619e5997134b96d36"} Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.072820 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.130232 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrcmm\" (UniqueName: \"kubernetes.io/projected/1eb62c48-8808-44e9-8fbc-781e0d252f01-kube-api-access-lrcmm\") pod \"1eb62c48-8808-44e9-8fbc-781e0d252f01\" (UID: \"1eb62c48-8808-44e9-8fbc-781e0d252f01\") " Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.134821 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.135517 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb62c48-8808-44e9-8fbc-781e0d252f01-kube-api-access-lrcmm" (OuterVolumeSpecName: "kube-api-access-lrcmm") pod "1eb62c48-8808-44e9-8fbc-781e0d252f01" (UID: "1eb62c48-8808-44e9-8fbc-781e0d252f01"). InnerVolumeSpecName "kube-api-access-lrcmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.232078 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-config-data\") pod \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.232163 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5c5c\" (UniqueName: \"kubernetes.io/projected/04b1e748-a409-4bf1-b790-7517f2dfdfe4-kube-api-access-r5c5c\") pod \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.232247 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-combined-ca-bundle\") pod \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\" (UID: \"04b1e748-a409-4bf1-b790-7517f2dfdfe4\") " Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.233050 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrcmm\" (UniqueName: \"kubernetes.io/projected/1eb62c48-8808-44e9-8fbc-781e0d252f01-kube-api-access-lrcmm\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.234970 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b1e748-a409-4bf1-b790-7517f2dfdfe4-kube-api-access-r5c5c" (OuterVolumeSpecName: "kube-api-access-r5c5c") pod "04b1e748-a409-4bf1-b790-7517f2dfdfe4" (UID: "04b1e748-a409-4bf1-b790-7517f2dfdfe4"). InnerVolumeSpecName "kube-api-access-r5c5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.293409 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04b1e748-a409-4bf1-b790-7517f2dfdfe4" (UID: "04b1e748-a409-4bf1-b790-7517f2dfdfe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.334608 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5c5c\" (UniqueName: \"kubernetes.io/projected/04b1e748-a409-4bf1-b790-7517f2dfdfe4-kube-api-access-r5c5c\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.334633 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.335476 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-config-data" (OuterVolumeSpecName: "config-data") pod "04b1e748-a409-4bf1-b790-7517f2dfdfe4" (UID: "04b1e748-a409-4bf1-b790-7517f2dfdfe4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.436503 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b1e748-a409-4bf1-b790-7517f2dfdfe4-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.772190 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"04b1e748-a409-4bf1-b790-7517f2dfdfe4","Type":"ContainerDied","Data":"cb5e0f44139fc0de1b4d35959c797091c34ffc163a8cb3ee8782832d3197cfcb"} Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.772253 4832 scope.go:117] "RemoveContainer" containerID="722b6a31d27e7ef42f4934201e6976d164497cde5b6b5a2619e5997134b96d36" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.772504 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.773970 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1eb62c48-8808-44e9-8fbc-781e0d252f01","Type":"ContainerDied","Data":"a30a0fcdfa245af32363de0801496ac2b747ab4108c1498729d6fb759a6799b4"} Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.774050 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.810768 4832 scope.go:117] "RemoveContainer" containerID="414911ca9e1a1d92fcc7716770a0ac8e7d74081d10394f6ede3d2691b0cf872e" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.819002 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.833345 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.857191 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.873220 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.889069 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 18:45:59 crc kubenswrapper[4832]: E1002 18:45:59.889664 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb62c48-8808-44e9-8fbc-781e0d252f01" containerName="kube-state-metrics" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.889677 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb62c48-8808-44e9-8fbc-781e0d252f01" containerName="kube-state-metrics" Oct 02 18:45:59 crc kubenswrapper[4832]: E1002 18:45:59.889702 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerName="extract-utilities" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.889708 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerName="extract-utilities" Oct 02 18:45:59 crc kubenswrapper[4832]: E1002 18:45:59.889722 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerName="registry-server" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.889729 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerName="registry-server" Oct 02 18:45:59 crc kubenswrapper[4832]: E1002 18:45:59.889775 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b1e748-a409-4bf1-b790-7517f2dfdfe4" containerName="mysqld-exporter" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.889781 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b1e748-a409-4bf1-b790-7517f2dfdfe4" containerName="mysqld-exporter" Oct 02 18:45:59 crc kubenswrapper[4832]: E1002 18:45:59.889799 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerName="extract-content" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.889804 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerName="extract-content" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.890019 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bec9629-2cbf-4111-808b-aa67cb8bd060" containerName="registry-server" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.890032 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb62c48-8808-44e9-8fbc-781e0d252f01" containerName="kube-state-metrics" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.890049 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b1e748-a409-4bf1-b790-7517f2dfdfe4" containerName="mysqld-exporter" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.890870 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.893746 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.895624 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.899204 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.922228 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.924931 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.927361 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.927820 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.940782 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.949029 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.949078 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq8p7\" (UniqueName: \"kubernetes.io/projected/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-api-access-pq8p7\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.949356 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:45:59 crc kubenswrapper[4832]: I1002 18:45:59.949604 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.052058 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.052140 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9nfv\" (UniqueName: \"kubernetes.io/projected/6888060d-2a19-41ee-ac4d-06a28c11a0f6-kube-api-access-z9nfv\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.052210 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.052243 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.052435 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.052454 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.052573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.052604 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq8p7\" (UniqueName: \"kubernetes.io/projected/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-api-access-pq8p7\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.061387 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.062518 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.066980 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.069169 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq8p7\" (UniqueName: \"kubernetes.io/projected/ecd10228-8f4c-46ea-946d-838bc37b46cc-kube-api-access-pq8p7\") pod \"kube-state-metrics-0\" (UID: \"ecd10228-8f4c-46ea-946d-838bc37b46cc\") " pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.155110 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.155178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.155200 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.155351 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9nfv\" (UniqueName: \"kubernetes.io/projected/6888060d-2a19-41ee-ac4d-06a28c11a0f6-kube-api-access-z9nfv\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.158481 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-config-data\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.158544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.158833 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6888060d-2a19-41ee-ac4d-06a28c11a0f6-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.172104 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9nfv\" (UniqueName: \"kubernetes.io/projected/6888060d-2a19-41ee-ac4d-06a28c11a0f6-kube-api-access-z9nfv\") pod \"mysqld-exporter-0\" (UID: \"6888060d-2a19-41ee-ac4d-06a28c11a0f6\") " pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.211219 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.248980 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.758944 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 02 18:46:00 crc kubenswrapper[4832]: W1002 18:46:00.762587 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd10228_8f4c_46ea_946d_838bc37b46cc.slice/crio-5fc2423a64a1d35cfd1273539f38e0d0cfeb3a8bbe3ce03abd15a6ff20404b1d WatchSource:0}: Error finding container 5fc2423a64a1d35cfd1273539f38e0d0cfeb3a8bbe3ce03abd15a6ff20404b1d: Status 404 returned error can't find the container with id 5fc2423a64a1d35cfd1273539f38e0d0cfeb3a8bbe3ce03abd15a6ff20404b1d Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.788049 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ecd10228-8f4c-46ea-946d-838bc37b46cc","Type":"ContainerStarted","Data":"5fc2423a64a1d35cfd1273539f38e0d0cfeb3a8bbe3ce03abd15a6ff20404b1d"} Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.861656 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.861988 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="ceilometer-central-agent" containerID="cri-o://925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6" gracePeriod=30 Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.862205 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="ceilometer-notification-agent" containerID="cri-o://4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873" gracePeriod=30 Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.862210 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="proxy-httpd" containerID="cri-o://32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef" gracePeriod=30 Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.862600 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="sg-core" containerID="cri-o://ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e" gracePeriod=30 Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.917130 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.917450 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.924358 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.935561 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.935612 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.950381 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.950898 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.967851 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 02 18:46:00 crc kubenswrapper[4832]: I1002 18:46:00.997927 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.234683 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b1e748-a409-4bf1-b790-7517f2dfdfe4" path="/var/lib/kubelet/pods/04b1e748-a409-4bf1-b790-7517f2dfdfe4/volumes" Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.235348 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb62c48-8808-44e9-8fbc-781e0d252f01" path="/var/lib/kubelet/pods/1eb62c48-8808-44e9-8fbc-781e0d252f01/volumes" Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.808458 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6888060d-2a19-41ee-ac4d-06a28c11a0f6","Type":"ContainerStarted","Data":"f34c9fff280a875806d53d928a17e0e7820e3fa30eb1dac0b81331a124938221"} Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.811713 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerID="32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef" exitCode=0 Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.811731 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerID="ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e" exitCode=2 Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.811739 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerID="925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6" exitCode=0 Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.811767 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerDied","Data":"32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef"} Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.811782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerDied","Data":"ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e"} Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.811791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerDied","Data":"925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6"} Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.814611 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ecd10228-8f4c-46ea-946d-838bc37b46cc","Type":"ContainerStarted","Data":"ee35e3bb9a45d745c3082a43f9926ad72e579a6e4d648a0fcc9f074e435a176f"} Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.814747 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.814770 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.834980 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 02 18:46:01 crc kubenswrapper[4832]: I1002 18:46:01.849334 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.386641913 podStartE2EDuration="2.849306968s" podCreationTimestamp="2025-10-02 18:45:59 +0000 UTC" firstStartedPulling="2025-10-02 18:46:00.764375442 +0000 UTC m=+1517.733818314" lastFinishedPulling="2025-10-02 18:46:01.227040497 +0000 UTC m=+1518.196483369" observedRunningTime="2025-10-02 18:46:01.844495658 +0000 UTC m=+1518.813938730" watchObservedRunningTime="2025-10-02 18:46:01.849306968 +0000 UTC m=+1518.818749840" Oct 02 18:46:02 crc kubenswrapper[4832]: I1002 18:46:02.832206 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6888060d-2a19-41ee-ac4d-06a28c11a0f6","Type":"ContainerStarted","Data":"7790add9c12a21585eb934584dfec5c84758c7e913b7db336d291b10ba9354ea"} Oct 02 18:46:02 crc kubenswrapper[4832]: I1002 18:46:02.850575 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.265926523 podStartE2EDuration="3.850559565s" podCreationTimestamp="2025-10-02 18:45:59 +0000 UTC" firstStartedPulling="2025-10-02 18:46:00.93080635 +0000 UTC m=+1517.900249222" lastFinishedPulling="2025-10-02 18:46:01.515439392 +0000 UTC m=+1518.484882264" observedRunningTime="2025-10-02 18:46:02.846605822 +0000 UTC m=+1519.816048684" watchObservedRunningTime="2025-10-02 18:46:02.850559565 +0000 UTC m=+1519.820002437" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.426338 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.527761 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzjs7\" (UniqueName: \"kubernetes.io/projected/1fca7071-4799-4d4d-b132-4bea35f0aa6c-kube-api-access-rzjs7\") pod \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.528089 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-run-httpd\") pod \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.528124 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-combined-ca-bundle\") pod \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.528157 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-config-data\") pod \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.528249 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-sg-core-conf-yaml\") pod \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.528302 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-log-httpd\") pod \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.528336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-scripts\") pod \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\" (UID: \"1fca7071-4799-4d4d-b132-4bea35f0aa6c\") " Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.528554 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1fca7071-4799-4d4d-b132-4bea35f0aa6c" (UID: "1fca7071-4799-4d4d-b132-4bea35f0aa6c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.529276 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1fca7071-4799-4d4d-b132-4bea35f0aa6c" (UID: "1fca7071-4799-4d4d-b132-4bea35f0aa6c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.529363 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.533718 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-scripts" (OuterVolumeSpecName: "scripts") pod "1fca7071-4799-4d4d-b132-4bea35f0aa6c" (UID: "1fca7071-4799-4d4d-b132-4bea35f0aa6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.534401 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fca7071-4799-4d4d-b132-4bea35f0aa6c-kube-api-access-rzjs7" (OuterVolumeSpecName: "kube-api-access-rzjs7") pod "1fca7071-4799-4d4d-b132-4bea35f0aa6c" (UID: "1fca7071-4799-4d4d-b132-4bea35f0aa6c"). InnerVolumeSpecName "kube-api-access-rzjs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.569907 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1fca7071-4799-4d4d-b132-4bea35f0aa6c" (UID: "1fca7071-4799-4d4d-b132-4bea35f0aa6c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.623890 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fca7071-4799-4d4d-b132-4bea35f0aa6c" (UID: "1fca7071-4799-4d4d-b132-4bea35f0aa6c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.632093 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzjs7\" (UniqueName: \"kubernetes.io/projected/1fca7071-4799-4d4d-b132-4bea35f0aa6c-kube-api-access-rzjs7\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.632129 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.632138 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.632147 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1fca7071-4799-4d4d-b132-4bea35f0aa6c-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.632155 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.680380 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-config-data" (OuterVolumeSpecName: "config-data") pod "1fca7071-4799-4d4d-b132-4bea35f0aa6c" (UID: "1fca7071-4799-4d4d-b132-4bea35f0aa6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.734048 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fca7071-4799-4d4d-b132-4bea35f0aa6c-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.880938 4832 generic.go:334] "Generic (PLEG): container finished" podID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerID="4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873" exitCode=0 Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.881019 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerDied","Data":"4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873"} Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.881051 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1fca7071-4799-4d4d-b132-4bea35f0aa6c","Type":"ContainerDied","Data":"1b471d942df67222e9ca467bb47a8511b1c2fad9e95ed84d8a863fa0c3922689"} Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.881057 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.881078 4832 scope.go:117] "RemoveContainer" containerID="32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.946333 4832 scope.go:117] "RemoveContainer" containerID="ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.958848 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.976738 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.986489 4832 scope.go:117] "RemoveContainer" containerID="4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.994978 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:46:05 crc kubenswrapper[4832]: E1002 18:46:05.995519 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="ceilometer-notification-agent" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.995535 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="ceilometer-notification-agent" Oct 02 18:46:05 crc kubenswrapper[4832]: E1002 18:46:05.995553 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="ceilometer-central-agent" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.995559 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="ceilometer-central-agent" Oct 02 18:46:05 crc kubenswrapper[4832]: E1002 18:46:05.995612 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="sg-core" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.995619 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="sg-core" Oct 02 18:46:05 crc kubenswrapper[4832]: E1002 18:46:05.995633 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="proxy-httpd" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.995647 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="proxy-httpd" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.995854 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="ceilometer-central-agent" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.995875 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="ceilometer-notification-agent" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.995889 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="sg-core" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.995902 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" containerName="proxy-httpd" Oct 02 18:46:05 crc kubenswrapper[4832]: I1002 18:46:05.998017 4832 util.go:30] "No 
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.001189 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.001361 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.009615 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.011711 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.012768 4832 scope.go:117] "RemoveContainer" containerID="925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.040961 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.041048 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.041088 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-scripts\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.041161 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-config-data\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.041183 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.041207 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhx77\" (UniqueName: \"kubernetes.io/projected/da9d5b68-94f7-43cd-8fb0-37aabb0449de-kube-api-access-jhx77\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.041294 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-log-httpd\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.041386 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-run-httpd\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.070872 4832 scope.go:117] "RemoveContainer" containerID="32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef"
Oct 02 18:46:06 crc kubenswrapper[4832]: E1002 18:46:06.073476 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef\": container with ID starting with 32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef not found: ID does not exist" containerID="32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.073525 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef"} err="failed to get container status \"32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef\": rpc error: code = NotFound desc = could not find container \"32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef\": container with ID starting with 32fa0b6461d4fbf2a74a93583c3ceda1bd4fc08cf81e40cabd8f3a40e550edef not found: ID does not exist"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.073560 4832 scope.go:117] "RemoveContainer" containerID="ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e"
Oct 02 18:46:06 crc kubenswrapper[4832]: E1002 18:46:06.076085 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e\": container with ID starting with ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e not found: ID does not exist" containerID="ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.076109 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e"} err="failed to get container status \"ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e\": rpc error: code = NotFound desc = could not find container \"ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e\": container with ID starting with ba5bede63a0d789d49fd8a929b9db3badbb4fab57c3367f0fa42870090d4e13e not found: ID does not exist"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.076123 4832 scope.go:117] "RemoveContainer" containerID="4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873"
Oct 02 18:46:06 crc kubenswrapper[4832]: E1002 18:46:06.076907 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873\": container with ID starting with 4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873 not found: ID does not exist" containerID="4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.076937 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873"} err="failed to get container status \"4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873\": rpc error: code = NotFound desc = could not find container \"4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873\": container with ID starting with 4157d2c29db901daefa845012da4fd9b9231646864999299b5fd725c083ba873 not found: ID does not exist"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.076951 4832 scope.go:117] "RemoveContainer" containerID="925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6"
Oct 02 18:46:06 crc kubenswrapper[4832]: E1002 18:46:06.077225 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6\": container with ID starting with 925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6 not found: ID does not exist" containerID="925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.077249 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6"} err="failed to get container status \"925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6\": rpc error: code = NotFound desc = could not find container \"925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6\": container with ID starting with 925b46e346050b61415af1f1ba183371cc67b485b883b9b62d27c4b4b76faab6 not found: ID does not exist"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.143220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.143591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.144197 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-scripts\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.144277 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-config-data\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.144297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.144329 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhx77\" (UniqueName: \"kubernetes.io/projected/da9d5b68-94f7-43cd-8fb0-37aabb0449de-kube-api-access-jhx77\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.144369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-log-httpd\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.144430 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-run-httpd\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.144873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-run-httpd\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.145061 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-log-httpd\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.150712 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.150756 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.150795 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-scripts\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.152850 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-config-data\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.161028 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.163140 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhx77\" (UniqueName: \"kubernetes.io/projected/da9d5b68-94f7-43cd-8fb0-37aabb0449de-kube-api-access-jhx77\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhx77\" (UniqueName: \"kubernetes.io/projected/da9d5b68-94f7-43cd-8fb0-37aabb0449de-kube-api-access-jhx77\") pod \"ceilometer-0\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " pod="openstack/ceilometer-0" Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.314882 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8c2sx" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="registry-server" probeResult="failure" output=< Oct 02 18:46:06 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 18:46:06 crc kubenswrapper[4832]: > Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.333882 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.500530 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.713286 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:46:06 crc kubenswrapper[4832]: I1002 18:46:06.836749 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tvqjj"] Oct 02 18:46:07 crc kubenswrapper[4832]: I1002 18:46:07.071543 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:46:07 crc kubenswrapper[4832]: I1002 18:46:07.239065 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fca7071-4799-4d4d-b132-4bea35f0aa6c" path="/var/lib/kubelet/pods/1fca7071-4799-4d4d-b132-4bea35f0aa6c/volumes" Oct 02 18:46:07 crc kubenswrapper[4832]: I1002 18:46:07.917779 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tvqjj" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="registry-server" containerID="cri-o://38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd" gracePeriod=2 Oct 02 18:46:07 crc kubenswrapper[4832]: I1002 18:46:07.918363 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerStarted","Data":"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5"} Oct 02 18:46:07 crc kubenswrapper[4832]: I1002 18:46:07.918392 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerStarted","Data":"1a5a09df777cef7015573307652250ee1fc55960ecd9471dc583deab4d70e5c2"} Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.606371 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.716487 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-utilities\") pod \"55589366-b8da-4dc5-a096-947a02752427\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.716664 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b7gr\" (UniqueName: \"kubernetes.io/projected/55589366-b8da-4dc5-a096-947a02752427-kube-api-access-9b7gr\") pod \"55589366-b8da-4dc5-a096-947a02752427\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.716849 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-catalog-content\") pod \"55589366-b8da-4dc5-a096-947a02752427\" (UID: \"55589366-b8da-4dc5-a096-947a02752427\") " Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.718237 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-utilities" (OuterVolumeSpecName: "utilities") pod "55589366-b8da-4dc5-a096-947a02752427" (UID: "55589366-b8da-4dc5-a096-947a02752427"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.732862 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55589366-b8da-4dc5-a096-947a02752427-kube-api-access-9b7gr" (OuterVolumeSpecName: "kube-api-access-9b7gr") pod "55589366-b8da-4dc5-a096-947a02752427" (UID: "55589366-b8da-4dc5-a096-947a02752427"). InnerVolumeSpecName "kube-api-access-9b7gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.767522 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55589366-b8da-4dc5-a096-947a02752427" (UID: "55589366-b8da-4dc5-a096-947a02752427"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.821075 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.821111 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55589366-b8da-4dc5-a096-947a02752427-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.821120 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b7gr\" (UniqueName: \"kubernetes.io/projected/55589366-b8da-4dc5-a096-947a02752427-kube-api-access-9b7gr\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.936926 4832 generic.go:334] "Generic (PLEG): container finished" podID="55589366-b8da-4dc5-a096-947a02752427" containerID="38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd" exitCode=0 Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.937008 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvqjj" event={"ID":"55589366-b8da-4dc5-a096-947a02752427","Type":"ContainerDied","Data":"38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd"} Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.937046 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvqjj" event={"ID":"55589366-b8da-4dc5-a096-947a02752427","Type":"ContainerDied","Data":"a93cb5ca4355da7110e783169caef866168fc74ee64c46b80d8cb405b0ddbc6b"} Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.937069 4832 scope.go:117] "RemoveContainer" containerID="38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.937303 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tvqjj" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.942254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerStarted","Data":"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d"} Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.961342 4832 scope.go:117] "RemoveContainer" containerID="a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806" Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.993483 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tvqjj"] Oct 02 18:46:08 crc kubenswrapper[4832]: I1002 18:46:08.998719 4832 scope.go:117] "RemoveContainer" containerID="9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c" Oct 02 18:46:09 crc kubenswrapper[4832]: I1002 18:46:09.005138 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tvqjj"] Oct 02 18:46:09 crc kubenswrapper[4832]: I1002 18:46:09.032924 4832 scope.go:117] "RemoveContainer" containerID="38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd" Oct 02 18:46:09 crc kubenswrapper[4832]: E1002 18:46:09.033548 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd\": container with ID starting with 38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd not found: ID does not exist" containerID="38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd" Oct 02 18:46:09 crc kubenswrapper[4832]: I1002 18:46:09.033589 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd"} err="failed to get container status \"38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd\": rpc error: code = NotFound desc = could not find container \"38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd\": container with ID starting with 38ff34a22832f84363473a49e5e48a0d2f13c8aa446505d26b9dca6004c051dd not found: ID does not exist" Oct 02 18:46:09 crc kubenswrapper[4832]: I1002 18:46:09.033615 4832 scope.go:117] "RemoveContainer" containerID="a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806" Oct 02 18:46:09 crc kubenswrapper[4832]: E1002 18:46:09.035530 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806\": container with ID starting with a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806 not found: ID does not exist" containerID="a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806" Oct 02 18:46:09 crc kubenswrapper[4832]: I1002 18:46:09.035566 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806"} err="failed to get container status \"a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806\": rpc error: code = NotFound desc = could not find container \"a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806\": container with ID starting with a808ccd00ed7e2384b43ef4fa1729b7940f3afc6ab9d3be265a2d0fe1f64c806 not found: ID does not 
exist" Oct 02 18:46:09 crc kubenswrapper[4832]: I1002 18:46:09.035590 4832 scope.go:117] "RemoveContainer" containerID="9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c" Oct 02 18:46:09 crc kubenswrapper[4832]: E1002 18:46:09.036506 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c\": container with ID starting with 9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c not found: ID does not exist" containerID="9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c" Oct 02 18:46:09 crc kubenswrapper[4832]: I1002 18:46:09.036535 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c"} err="failed to get container status \"9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c\": rpc error: code = NotFound desc = could not find container \"9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c\": container with ID starting with 9e4dec53542722ff1b3026b78dd26c17457ab8559638e7400027d4cd81fc620c not found: ID does not exist" Oct 02 18:46:09 crc kubenswrapper[4832]: I1002 18:46:09.238318 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55589366-b8da-4dc5-a096-947a02752427" path="/var/lib/kubelet/pods/55589366-b8da-4dc5-a096-947a02752427/volumes" Oct 02 18:46:10 crc kubenswrapper[4832]: I1002 18:46:10.425540 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 02 18:46:11 crc kubenswrapper[4832]: I1002 18:46:11.004219 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerStarted","Data":"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4"} Oct 02 18:46:13 crc kubenswrapper[4832]: I1002 18:46:13.036342 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerStarted","Data":"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5"} Oct 02 18:46:13 crc kubenswrapper[4832]: I1002 18:46:13.036861 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 02 18:46:15 crc kubenswrapper[4832]: I1002 18:46:15.315459 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:46:15 crc kubenswrapper[4832]: I1002 18:46:15.354103 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.027717872 podStartE2EDuration="10.354081705s" podCreationTimestamp="2025-10-02 18:46:05 +0000 UTC" firstStartedPulling="2025-10-02 18:46:07.079813815 +0000 UTC m=+1524.049256697" lastFinishedPulling="2025-10-02 18:46:12.406177648 +0000 UTC m=+1529.375620530" observedRunningTime="2025-10-02 18:46:13.064211019 +0000 UTC m=+1530.033653891" watchObservedRunningTime="2025-10-02 18:46:15.354081705 +0000 UTC m=+1532.323524577" Oct 02 18:46:15 crc kubenswrapper[4832]: I1002 18:46:15.381475 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:46:16 crc kubenswrapper[4832]: I1002 18:46:16.085894 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-8c2sx"] Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.101686 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8c2sx" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="registry-server" containerID="cri-o://0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2" gracePeriod=2 Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.703895 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.859227 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-catalog-content\") pod \"2b2c05ec-ad79-43c8-8b11-4406770b8875\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.860014 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-utilities\") pod \"2b2c05ec-ad79-43c8-8b11-4406770b8875\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.860178 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cj44\" (UniqueName: \"kubernetes.io/projected/2b2c05ec-ad79-43c8-8b11-4406770b8875-kube-api-access-9cj44\") pod \"2b2c05ec-ad79-43c8-8b11-4406770b8875\" (UID: \"2b2c05ec-ad79-43c8-8b11-4406770b8875\") " Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.860976 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-utilities" (OuterVolumeSpecName: "utilities") pod "2b2c05ec-ad79-43c8-8b11-4406770b8875" (UID: "2b2c05ec-ad79-43c8-8b11-4406770b8875"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.876483 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2c05ec-ad79-43c8-8b11-4406770b8875-kube-api-access-9cj44" (OuterVolumeSpecName: "kube-api-access-9cj44") pod "2b2c05ec-ad79-43c8-8b11-4406770b8875" (UID: "2b2c05ec-ad79-43c8-8b11-4406770b8875"). InnerVolumeSpecName "kube-api-access-9cj44". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.947519 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b2c05ec-ad79-43c8-8b11-4406770b8875" (UID: "2b2c05ec-ad79-43c8-8b11-4406770b8875"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.965336 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.965388 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b2c05ec-ad79-43c8-8b11-4406770b8875-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:17 crc kubenswrapper[4832]: I1002 18:46:17.965440 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cj44\" (UniqueName: \"kubernetes.io/projected/2b2c05ec-ad79-43c8-8b11-4406770b8875-kube-api-access-9cj44\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.122963 4832 generic.go:334] "Generic (PLEG): container finished" podID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerID="0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2" exitCode=0 Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.123007 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c2sx" event={"ID":"2b2c05ec-ad79-43c8-8b11-4406770b8875","Type":"ContainerDied","Data":"0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2"} Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.123060 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8c2sx" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.123952 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8c2sx" event={"ID":"2b2c05ec-ad79-43c8-8b11-4406770b8875","Type":"ContainerDied","Data":"01bbc07aa916d5da2c5fb3b1ffaa88fa2120de93bdcbbae4ed3e02f2976885c6"} Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.123968 4832 scope.go:117] "RemoveContainer" containerID="0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.170818 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8c2sx"] Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.170850 4832 scope.go:117] "RemoveContainer" containerID="605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.185016 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8c2sx"] Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.209208 4832 scope.go:117] "RemoveContainer" containerID="2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.265997 4832 scope.go:117] "RemoveContainer" containerID="0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2" Oct 02 18:46:18 crc kubenswrapper[4832]: E1002 18:46:18.266652 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2\": container with ID starting with 0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2 not found: ID does not exist" containerID="0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.266865 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2"} err="failed to get container status \"0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2\": rpc error: code = NotFound desc = could not find container \"0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2\": container with ID starting with 0e41b8043dbd3e7dbca0c11f79cb0fdaa46220f7f22acda8c6bc595dd4ea4de2 not found: ID does not exist" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.267078 4832 scope.go:117] "RemoveContainer" containerID="605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b" Oct 02 18:46:18 crc kubenswrapper[4832]: E1002 18:46:18.267577 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b\": container with ID starting with 605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b not found: ID does not exist" containerID="605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.267608 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b"} err="failed to get container status \"605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b\": rpc error: code = NotFound desc = could not find container \"605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b\": container with ID starting with 605591607d90c730b86bbb92bddc817fb6766826c7ad9299f5dbf2a1470a035b not found: ID does not exist" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.267631 4832 scope.go:117] "RemoveContainer" containerID="2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797" Oct 02 18:46:18 crc kubenswrapper[4832]: E1002 18:46:18.267855 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797\": container with ID starting with 2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797 not found: ID does not exist" containerID="2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797" Oct 02 18:46:18 crc kubenswrapper[4832]: I1002 18:46:18.267873 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797"} err="failed to get container status \"2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797\": rpc error: code = NotFound desc = could not find container \"2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797\": container with ID starting with 2dbf4e9755a08959e70d8e66135fe735a6e901a3a1a2b06d34ad930f7f984797 not found: ID does not exist" Oct 02 18:46:19 crc kubenswrapper[4832]: I1002 18:46:19.235550 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" path="/var/lib/kubelet/pods/2b2c05ec-ad79-43c8-8b11-4406770b8875/volumes" Oct 02 18:46:36 crc kubenswrapper[4832]: I1002 18:46:36.343122 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.202484 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-db-sync-dx58r"] Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.212719 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-dx58r"] Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.235541 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4" path="/var/lib/kubelet/pods/c3df4096-7c1e-4b8e-bdc8-23bfcbe6e7c4/volumes" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.357215 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-lsb6q"] Oct 02 18:46:49 crc kubenswrapper[4832]: E1002 18:46:49.358111 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="registry-server" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.358140 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="registry-server" Oct 02 18:46:49 crc kubenswrapper[4832]: E1002 18:46:49.358167 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="registry-server" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.358175 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="registry-server" Oct 02 18:46:49 crc kubenswrapper[4832]: E1002 18:46:49.358196 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="extract-content" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.358204 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="extract-content" Oct 02 18:46:49 crc kubenswrapper[4832]: E1002 18:46:49.358232 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="extract-utilities" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.358241 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="extract-utilities" Oct 02 18:46:49 crc kubenswrapper[4832]: E1002 18:46:49.358278 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="extract-content" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.358287 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="extract-content" Oct 02 18:46:49 crc kubenswrapper[4832]: E1002 18:46:49.358311 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="extract-utilities" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.358318 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="extract-utilities" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.358608 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="55589366-b8da-4dc5-a096-947a02752427" containerName="registry-server" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.358658 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2c05ec-ad79-43c8-8b11-4406770b8875" containerName="registry-server" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.359716 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.386896 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-lsb6q"] Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.545164 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-config-data\") pod \"heat-db-sync-lsb6q\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.545249 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25m4d\" (UniqueName: \"kubernetes.io/projected/36e2b7cf-ba0c-4217-93e4-503ce1e40755-kube-api-access-25m4d\") pod \"heat-db-sync-lsb6q\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.545790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-combined-ca-bundle\") pod \"heat-db-sync-lsb6q\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.649164 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-config-data\") pod \"heat-db-sync-lsb6q\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.649359 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25m4d\" (UniqueName: \"kubernetes.io/projected/36e2b7cf-ba0c-4217-93e4-503ce1e40755-kube-api-access-25m4d\") pod \"heat-db-sync-lsb6q\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.649675 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-combined-ca-bundle\") pod \"heat-db-sync-lsb6q\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.658131 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-config-data\") pod \"heat-db-sync-lsb6q\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.670420 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-combined-ca-bundle\") pod \"heat-db-sync-lsb6q\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.675896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25m4d\" (UniqueName: \"kubernetes.io/projected/36e2b7cf-ba0c-4217-93e4-503ce1e40755-kube-api-access-25m4d\") pod \"heat-db-sync-lsb6q\" (UID: 
\"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:49 crc kubenswrapper[4832]: I1002 18:46:49.696895 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-lsb6q" Oct 02 18:46:50 crc kubenswrapper[4832]: I1002 18:46:50.219062 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-lsb6q"] Oct 02 18:46:50 crc kubenswrapper[4832]: I1002 18:46:50.230895 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 18:46:50 crc kubenswrapper[4832]: I1002 18:46:50.548090 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lsb6q" event={"ID":"36e2b7cf-ba0c-4217-93e4-503ce1e40755","Type":"ContainerStarted","Data":"0a865f8a8942b545bdfdfae03660e56d1450a5e65955cabf3b9adc26c4aa69bf"} Oct 02 18:46:51 crc kubenswrapper[4832]: I1002 18:46:51.634339 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:46:51 crc kubenswrapper[4832]: I1002 18:46:51.634836 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="ceilometer-central-agent" containerID="cri-o://b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5" gracePeriod=30 Oct 02 18:46:51 crc kubenswrapper[4832]: I1002 18:46:51.634950 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="sg-core" containerID="cri-o://88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4" gracePeriod=30 Oct 02 18:46:51 crc kubenswrapper[4832]: I1002 18:46:51.634904 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="proxy-httpd" containerID="cri-o://1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5" gracePeriod=30 Oct 02 18:46:51 crc kubenswrapper[4832]: I1002 18:46:51.634985 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="ceilometer-notification-agent" containerID="cri-o://7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d" gracePeriod=30 Oct 02 18:46:52 crc kubenswrapper[4832]: E1002 18:46:52.070118 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda9d5b68_94f7_43cd_8fb0_37aabb0449de.slice/crio-conmon-1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5.scope\": RecentStats: unable to find data in memory cache]" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.107190 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.633491 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649375 4832 generic.go:334] "Generic (PLEG): container finished" podID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerID="1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5" exitCode=0 Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649419 4832 generic.go:334] "Generic (PLEG): container finished" podID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerID="88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4" exitCode=2 Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649428 4832 generic.go:334] "Generic (PLEG): container finished" podID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerID="7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d" exitCode=0 Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649438 4832 generic.go:334] "Generic (PLEG): container finished" podID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerID="b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5" exitCode=0 Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerDied","Data":"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5"} Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649498 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerDied","Data":"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4"} Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649512 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerDied","Data":"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d"} Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649523 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerDied","Data":"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5"} Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649537 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"da9d5b68-94f7-43cd-8fb0-37aabb0449de","Type":"ContainerDied","Data":"1a5a09df777cef7015573307652250ee1fc55960ecd9471dc583deab4d70e5c2"} Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.649556 4832 scope.go:117] "RemoveContainer" containerID="1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.712231 4832 scope.go:117] "RemoveContainer" containerID="88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.719018 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.736559 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-config-data\") pod \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.736668 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-ceilometer-tls-certs\") pod \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.736713 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhx77\" (UniqueName: \"kubernetes.io/projected/da9d5b68-94f7-43cd-8fb0-37aabb0449de-kube-api-access-jhx77\") pod \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.736768 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-sg-core-conf-yaml\") pod \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.736799 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-log-httpd\") pod \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.736819 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-combined-ca-bundle\") pod \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.736839 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-run-httpd\") pod \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.736861 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-scripts\") pod \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\" (UID: \"da9d5b68-94f7-43cd-8fb0-37aabb0449de\") " Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.738199 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "da9d5b68-94f7-43cd-8fb0-37aabb0449de" (UID: "da9d5b68-94f7-43cd-8fb0-37aabb0449de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.740120 4832 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.741849 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "da9d5b68-94f7-43cd-8fb0-37aabb0449de" (UID: "da9d5b68-94f7-43cd-8fb0-37aabb0449de"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.744535 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da9d5b68-94f7-43cd-8fb0-37aabb0449de-kube-api-access-jhx77" (OuterVolumeSpecName: "kube-api-access-jhx77") pod "da9d5b68-94f7-43cd-8fb0-37aabb0449de" (UID: "da9d5b68-94f7-43cd-8fb0-37aabb0449de"). InnerVolumeSpecName "kube-api-access-jhx77". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.754815 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-scripts" (OuterVolumeSpecName: "scripts") pod "da9d5b68-94f7-43cd-8fb0-37aabb0449de" (UID: "da9d5b68-94f7-43cd-8fb0-37aabb0449de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.780941 4832 scope.go:117] "RemoveContainer" containerID="7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.837580 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "da9d5b68-94f7-43cd-8fb0-37aabb0449de" (UID: "da9d5b68-94f7-43cd-8fb0-37aabb0449de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.842444 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhx77\" (UniqueName: \"kubernetes.io/projected/da9d5b68-94f7-43cd-8fb0-37aabb0449de-kube-api-access-jhx77\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.842492 4832 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.842503 4832 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/da9d5b68-94f7-43cd-8fb0-37aabb0449de-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.842514 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.882343 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "da9d5b68-94f7-43cd-8fb0-37aabb0449de" (UID: "da9d5b68-94f7-43cd-8fb0-37aabb0449de"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.931983 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-config-data" (OuterVolumeSpecName: "config-data") pod "da9d5b68-94f7-43cd-8fb0-37aabb0449de" (UID: "da9d5b68-94f7-43cd-8fb0-37aabb0449de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.942509 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da9d5b68-94f7-43cd-8fb0-37aabb0449de" (UID: "da9d5b68-94f7-43cd-8fb0-37aabb0449de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.944492 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.944515 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:52 crc kubenswrapper[4832]: I1002 18:46:52.944525 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da9d5b68-94f7-43cd-8fb0-37aabb0449de-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.044379 4832 scope.go:117] "RemoveContainer" containerID="b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.085962 4832 scope.go:117] "RemoveContainer" containerID="1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5" Oct 02 18:46:53 crc kubenswrapper[4832]: E1002 18:46:53.086872 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": container with ID starting with 1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5 not found: ID does not exist" containerID="1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.086916 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5"} err="failed to get container status \"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": rpc error: code = NotFound desc = could not find container \"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": container with ID starting with 1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.086943 4832 scope.go:117] "RemoveContainer" containerID="88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4" Oct 02 18:46:53 crc kubenswrapper[4832]: E1002 18:46:53.087461 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": container with ID starting with 88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4 not found: ID does not exist" containerID="88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.087499 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4"} err="failed to get container status \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": rpc error: code = NotFound desc = could not find container \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": container with ID starting with 88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.087521 4832 scope.go:117] "RemoveContainer" containerID="7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d" Oct 02 18:46:53 crc kubenswrapper[4832]: E1002 18:46:53.087877 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": container with ID starting with 7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d not found: ID does not exist" containerID="7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.087911 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d"} err="failed to get container status \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": rpc error: code = NotFound desc = could not find container \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": container with ID starting with 7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.087925 4832 scope.go:117] "RemoveContainer" containerID="b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5" Oct 02 18:46:53 crc kubenswrapper[4832]: E1002 18:46:53.088558 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": container with ID starting with b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5 not found: ID does not exist" containerID="b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.088587 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5"} err="failed to get container status \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": rpc error: code = NotFound desc = could not find container \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": container with ID starting with b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.088605 4832 scope.go:117] "RemoveContainer" containerID="1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.088977 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5"} err="failed to get container status \"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": rpc error: code = NotFound desc = could not find container 
\"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": container with ID starting with 1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.089030 4832 scope.go:117] "RemoveContainer" containerID="88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.089336 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4"} err="failed to get container status \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": rpc error: code = NotFound desc = could not find container \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": container with ID starting with 88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.089359 4832 scope.go:117] "RemoveContainer" containerID="7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.089749 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d"} err="failed to get container status \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": rpc error: code = NotFound desc = could not find container \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": container with ID starting with 7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.089770 4832 scope.go:117] "RemoveContainer" containerID="b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.090022 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5"} err="failed to get container status \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": rpc error: code = NotFound desc = could not find container \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": container with ID starting with b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.090044 4832 scope.go:117] "RemoveContainer" containerID="1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.090342 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5"} err="failed to get container status \"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": rpc error: code = NotFound desc = could not find container \"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": container with ID starting with 1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.090372 4832 scope.go:117] "RemoveContainer" containerID="88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.090649 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4"} err="failed to get container status \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": rpc error: code = NotFound desc = could not find container \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": container with ID starting with 88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.090672 4832 scope.go:117] "RemoveContainer" containerID="7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.090909 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d"} err="failed to get container status \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": rpc error: code = NotFound desc = could not find container \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": container with ID starting with 7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.090927 4832 scope.go:117] "RemoveContainer" containerID="b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.091139 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5"} err="failed to get container status \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": rpc error: code = NotFound desc = could not find container \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": container with ID starting with b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.091158 4832 scope.go:117] "RemoveContainer" containerID="1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.091434 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5"} err="failed to get container status \"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": rpc error: code = NotFound desc = could not find container \"1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5\": container with ID starting with 1b8b121cc8ba9de3296ff0dbae87f41193a1aa3afee947e8db76c926ebc65ea5 not found: ID does not exist" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.091458 4832 scope.go:117] "RemoveContainer" containerID="88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.092194 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4"} err="failed to get container status \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": rpc error: code = NotFound desc = could not find container \"88d757e2bd75a1ba968bd93af2629fdd92916410769a0aa0cfe10c74a78a12b4\": container with ID starting with 
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.092233 4832 scope.go:117] "RemoveContainer" containerID="7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.092839 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d"} err="failed to get container status \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": rpc error: code = NotFound desc = could not find container \"7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d\": container with ID starting with 7eb3c214d64304201a582e453792bba721b80e714aafe1920ed65808cba5f08d not found: ID does not exist"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.092862 4832 scope.go:117] "RemoveContainer" containerID="b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.093135 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5"} err="failed to get container status \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": rpc error: code = NotFound desc = could not find container \"b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5\": container with ID starting with b8526b33afc31411623ace46cd5f6945d7bb711c4c9e5df5a0851da98a26f7c5 not found: ID does not exist"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.671533 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.703907 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.719447 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.739902 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:46:53 crc kubenswrapper[4832]: E1002 18:46:53.745857 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="ceilometer-notification-agent"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.745886 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="ceilometer-notification-agent"
Oct 02 18:46:53 crc kubenswrapper[4832]: E1002 18:46:53.745924 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="ceilometer-central-agent"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.745931 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="ceilometer-central-agent"
Oct 02 18:46:53 crc kubenswrapper[4832]: E1002 18:46:53.745952 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="proxy-httpd"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.745959 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="proxy-httpd"
Oct 02 18:46:53 crc kubenswrapper[4832]: E1002 18:46:53.745971 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="sg-core"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.745976 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="sg-core"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.746212 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="ceilometer-notification-agent"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.746223 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="ceilometer-central-agent"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.746245 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="proxy-httpd"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.746275 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" containerName="sg-core"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.748306 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.751994 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.752073 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.752294 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.772011 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.867421 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-config-data\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.867479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-scripts\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.867573 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a0ac381-9d1a-4068-b5bb-350b3979485e-run-httpd\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.867625 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfkv\" (UniqueName: \"kubernetes.io/projected/8a0ac381-9d1a-4068-b5bb-350b3979485e-kube-api-access-5mfkv\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.867687 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.867754 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.867843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a0ac381-9d1a-4068-b5bb-350b3979485e-log-httpd\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.867892 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.970322 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.970379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a0ac381-9d1a-4068-b5bb-350b3979485e-log-httpd\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.970434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.970501 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-config-data\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.970516 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-scripts\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.970558 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a0ac381-9d1a-4068-b5bb-350b3979485e-run-httpd\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0"
Oct 02 18:46:53 crc
kubenswrapper[4832]: I1002 18:46:53.970597 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mfkv\" (UniqueName: \"kubernetes.io/projected/8a0ac381-9d1a-4068-b5bb-350b3979485e-kube-api-access-5mfkv\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.970614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.971444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a0ac381-9d1a-4068-b5bb-350b3979485e-run-httpd\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.971747 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8a0ac381-9d1a-4068-b5bb-350b3979485e-log-httpd\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.975750 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.976530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-scripts\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.977922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.979749 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.981083 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a0ac381-9d1a-4068-b5bb-350b3979485e-config-data\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:53 crc kubenswrapper[4832]: I1002 18:46:53.993872 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mfkv\" (UniqueName: \"kubernetes.io/projected/8a0ac381-9d1a-4068-b5bb-350b3979485e-kube-api-access-5mfkv\") pod \"ceilometer-0\" (UID: \"8a0ac381-9d1a-4068-b5bb-350b3979485e\") " pod="openstack/ceilometer-0" Oct 02 18:46:54 crc kubenswrapper[4832]: I1002 18:46:54.081338 
4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 02 18:46:54 crc kubenswrapper[4832]: I1002 18:46:54.650700 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 02 18:46:54 crc kubenswrapper[4832]: W1002 18:46:54.679790 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a0ac381_9d1a_4068_b5bb_350b3979485e.slice/crio-0d3d56690f2979c834f0fbf555c1c85feb20e4a096f571a6a0582622a816da17 WatchSource:0}: Error finding container 0d3d56690f2979c834f0fbf555c1c85feb20e4a096f571a6a0582622a816da17: Status 404 returned error can't find the container with id 0d3d56690f2979c834f0fbf555c1c85feb20e4a096f571a6a0582622a816da17 Oct 02 18:46:55 crc kubenswrapper[4832]: I1002 18:46:55.242650 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da9d5b68-94f7-43cd-8fb0-37aabb0449de" path="/var/lib/kubelet/pods/da9d5b68-94f7-43cd-8fb0-37aabb0449de/volumes" Oct 02 18:46:55 crc kubenswrapper[4832]: I1002 18:46:55.696520 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a0ac381-9d1a-4068-b5bb-350b3979485e","Type":"ContainerStarted","Data":"0d3d56690f2979c834f0fbf555c1c85feb20e4a096f571a6a0582622a816da17"} Oct 02 18:46:56 crc kubenswrapper[4832]: I1002 18:46:56.914677 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerName="rabbitmq" containerID="cri-o://c42a03d3fac22f9fe9775d8529908cd52a14bdd1194f0e8c257c308ef4cfb443" gracePeriod=604796 Oct 02 18:46:57 crc kubenswrapper[4832]: I1002 18:46:57.007167 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerName="rabbitmq" containerID="cri-o://f4b873ee8011967884ef7ed7faeabb5852e3b54d5ce9de869f2dc8439b006c3f" gracePeriod=604796 Oct 02 18:47:01 crc kubenswrapper[4832]: I1002 18:47:01.777498 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Oct 02 18:47:02 crc kubenswrapper[4832]: I1002 18:47:02.022610 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Oct 02 18:47:04 crc kubenswrapper[4832]: I1002 18:47:04.813506 4832 generic.go:334] "Generic (PLEG): container finished" podID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerID="f4b873ee8011967884ef7ed7faeabb5852e3b54d5ce9de869f2dc8439b006c3f" exitCode=0 Oct 02 18:47:04 crc kubenswrapper[4832]: I1002 18:47:04.813593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9fd4cd0-fd84-45cb-9c68-0985f52a1054","Type":"ContainerDied","Data":"f4b873ee8011967884ef7ed7faeabb5852e3b54d5ce9de869f2dc8439b006c3f"} Oct 02 18:47:04 crc kubenswrapper[4832]: I1002 18:47:04.817310 4832 generic.go:334] "Generic (PLEG): container finished" podID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerID="c42a03d3fac22f9fe9775d8529908cd52a14bdd1194f0e8c257c308ef4cfb443" exitCode=0 Oct 02 18:47:04 crc kubenswrapper[4832]: I1002 18:47:04.817366 
4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ff074fc-c56e-40f3-a327-b829d84c9866","Type":"ContainerDied","Data":"c42a03d3fac22f9fe9775d8529908cd52a14bdd1194f0e8c257c308ef4cfb443"} Oct 02 18:47:05 crc kubenswrapper[4832]: I1002 18:47:05.582247 4832 scope.go:117] "RemoveContainer" containerID="a912356fa8f9d772762e7c377369ee2f6c3e83f78eb8f3dc619a9a4c4fc9c2a0" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.215178 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-cwwd2"] Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.218114 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.220875 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.237946 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-cwwd2"] Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.383366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.383677 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jbtv\" (UniqueName: \"kubernetes.io/projected/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-kube-api-access-6jbtv\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.383959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.384107 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.385060 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.385188 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.385329 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-config\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.491873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.491950 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.492009 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-config\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.492108 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.492191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jbtv\" (UniqueName: \"kubernetes.io/projected/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-kube-api-access-6jbtv\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.492448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.492505 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.493193 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-config\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc 
kubenswrapper[4832]: I1002 18:47:12.493431 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.494532 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.494728 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.495248 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.495682 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.515765 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jbtv\" (UniqueName: \"kubernetes.io/projected/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-kube-api-access-6jbtv\") pod \"dnsmasq-dns-7d84b4d45c-cwwd2\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:12 crc kubenswrapper[4832]: I1002 18:47:12.554083 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:14 crc kubenswrapper[4832]: E1002 18:47:14.268830 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Oct 02 18:47:14 crc kubenswrapper[4832]: E1002 18:47:14.269175 4832 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Oct 02 18:47:14 crc kubenswrapper[4832]: E1002 18:47:14.269325 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n87h56dh596h649h96hbdh64h7dh57h645h68dh78h54h55fh8dh54dh57fh654h575hb8h576h598h678h68chc9h645h679h67h574h55h9fhbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mfkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8a0ac381-9d1a-4068-b5bb-350b3979485e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:47:14 crc kubenswrapper[4832]: E1002 18:47:14.753305 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Oct 02 18:47:14 crc kubenswrapper[4832]: E1002 18:47:14.753558 4832 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Oct 02 18:47:14 crc kubenswrapper[4832]: E1002 18:47:14.754100 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25m4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-lsb6q_openstack(36e2b7cf-ba0c-4217-93e4-503ce1e40755): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 18:47:14 crc kubenswrapper[4832]: E1002 18:47:14.755326 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-lsb6q" podUID="36e2b7cf-ba0c-4217-93e4-503ce1e40755" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.876551 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.878108 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951329 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-plugins\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951402 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ff074fc-c56e-40f3-a327-b829d84c9866-erlang-cookie-secret\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951441 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-plugins-conf\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951488 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-erlang-cookie\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951508 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-tls\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951535 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-config-data\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951604 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ff074fc-c56e-40f3-a327-b829d84c9866-pod-info\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951619 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-erlang-cookie-secret\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951653 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-plugins\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951684 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-plugins-conf\") pod 
\"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951717 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-config-data\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951744 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-pod-info\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951787 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-server-conf\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951834 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-confd\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951863 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951925 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-erlang-cookie\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.951999 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-tls\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.952018 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd2lw\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-kube-api-access-bd2lw\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.952040 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-confd\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: 
\"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.952054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-server-conf\") pod \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\" (UID: \"c9fd4cd0-fd84-45cb-9c68-0985f52a1054\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.952092 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfv4z\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-kube-api-access-nfv4z\") pod \"4ff074fc-c56e-40f3-a327-b829d84c9866\" (UID: \"4ff074fc-c56e-40f3-a327-b829d84c9866\") " Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.952809 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.953539 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.964197 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.965608 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ff074fc-c56e-40f3-a327-b829d84c9866-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.970734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c9fd4cd0-fd84-45cb-9c68-0985f52a1054","Type":"ContainerDied","Data":"70c3346e26d3494cc77da6d7674cb5cae6d44c053602109b0f8b08f5ffaf2b11"} Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.971690 4832 scope.go:117] "RemoveContainer" containerID="f4b873ee8011967884ef7ed7faeabb5852e3b54d5ce9de869f2dc8439b006c3f" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.972011 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.971214 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.972473 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-pod-info" (OuterVolumeSpecName: "pod-info") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.977006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ff074fc-c56e-40f3-a327-b829d84c9866","Type":"ContainerDied","Data":"2db439288a0da92db7ed6714a958e0cc6742fe4b130cbced48a5188ad1dca67e"} Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.981005 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.982020 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.983321 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.986903 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.977608 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.989624 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). 
InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:14 crc kubenswrapper[4832]: I1002 18:47:14.993706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-kube-api-access-nfv4z" (OuterVolumeSpecName: "kube-api-access-nfv4z") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "kube-api-access-nfv4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: E1002 18:47:15.001482 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-lsb6q" podUID="36e2b7cf-ba0c-4217-93e4-503ce1e40755" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.002802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4ff074fc-c56e-40f3-a327-b829d84c9866-pod-info" (OuterVolumeSpecName: "pod-info") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.012058 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.017920 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.048544 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-kube-api-access-bd2lw" (OuterVolumeSpecName: "kube-api-access-bd2lw") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "kube-api-access-bd2lw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059088 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059120 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ff074fc-c56e-40f3-a327-b829d84c9866-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059130 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059139 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059148 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059155 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ff074fc-c56e-40f3-a327-b829d84c9866-pod-info\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059163 4832 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059171 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059180 4832 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059188 4832 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-pod-info\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059206 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059218 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059229 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 
crc kubenswrapper[4832]: I1002 18:47:15.059239 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059248 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd2lw\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-kube-api-access-bd2lw\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.059256 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfv4z\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-kube-api-access-nfv4z\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.075237 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-config-data" (OuterVolumeSpecName: "config-data") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.121340 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-config-data" (OuterVolumeSpecName: "config-data") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.124586 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-server-conf" (OuterVolumeSpecName: "server-conf") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.137395 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.155074 4832 scope.go:117] "RemoveContainer" containerID="aab15a231a3fad3f097a709bc28c3653595cdc6fce81258a72e2be1d69cc979f" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.161225 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.161252 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.161311 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ff074fc-c56e-40f3-a327-b829d84c9866-server-conf\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.161322 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.161729 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.166629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-server-conf" (OuterVolumeSpecName: "server-conf") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.196929 4832 scope.go:117] "RemoveContainer" containerID="c42a03d3fac22f9fe9775d8529908cd52a14bdd1194f0e8c257c308ef4cfb443" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.207162 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4ff074fc-c56e-40f3-a327-b829d84c9866" (UID: "4ff074fc-c56e-40f3-a327-b829d84c9866"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.209944 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c9fd4cd0-fd84-45cb-9c68-0985f52a1054" (UID: "c9fd4cd0-fd84-45cb-9c68-0985f52a1054"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.223457 4832 scope.go:117] "RemoveContainer" containerID="b21ca3741d0fca37291dae0e91f9ff546ba06e2cd3972007eb0e57226a7359cc" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.266398 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.266437 4832 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ff074fc-c56e-40f3-a327-b829d84c9866-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.266448 4832 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9fd4cd0-fd84-45cb-9c68-0985f52a1054-server-conf\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.266457 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.322316 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.355540 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.365297 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.377620 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.420806 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:47:15 crc kubenswrapper[4832]: E1002 18:47:15.421976 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerName="rabbitmq" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.422008 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerName="rabbitmq" Oct 02 18:47:15 crc kubenswrapper[4832]: E1002 18:47:15.422041 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerName="setup-container" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.422049 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerName="setup-container" Oct 02 18:47:15 crc kubenswrapper[4832]: E1002 18:47:15.422072 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerName="setup-container" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.422077 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerName="setup-container" Oct 02 18:47:15 crc kubenswrapper[4832]: E1002 18:47:15.422118 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerName="rabbitmq" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.422125 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerName="rabbitmq" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.423372 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerName="rabbitmq" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.423421 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerName="rabbitmq" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.426725 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.432153 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.433039 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.433787 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.434403 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.435140 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.435304 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4vkmm" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.435501 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.437312 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.443988 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.447215 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.448321 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.448529 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rn6lz" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.447325 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.448884 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.449227 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.450358 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.456199 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.483184 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.574790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-config-data\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.574828 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.574858 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.574878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ab42783-2e22-4b2f-9fab-be96ba65e345-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.574988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" 
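The entries above capture the full replacement cycle for the two RabbitMQ pods: the kubelet clears stale CPU- and memory-manager state keyed by the old pod UIDs (4ff074fc-… and c9fd4cd0-…), warms reflector caches for the Secrets and ConfigMaps the new pods reference, and then walks each volume of the new UIDs (9ab42783-… and c87efd10-…) through the reconciler phases logged below: "operationExecutor.VerifyControllerAttachedVolume started", then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded" (with an extra "MountVolume.MountDevice succeeded" for the local PVs). A minimal Go sketch, assuming a saved journal on stdin, that traces those phases per volume for one pod UID; the phase markers and the `\"`-escaped quoting are copied from the kubelet messages in this log, while the program name, flag, and extraction logic are illustrative assumptions:

```go
// volphases.go — order the volume events visible in this journal for one pod
// UID. A sketch, not a cluster component: phase strings match the kubelet
// messages above; everything else is an assumption for illustration.
package main

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"strings"
)

func main() {
	uid := flag.String("uid", "9ab42783-2e22-4b2f-9fab-be96ba65e345", "pod UID to trace")
	flag.Parse()

	// The three phases each volume passes through in this log, in order.
	// (MountDevice for local volumes is folded into the final phase here.)
	phases := []string{
		"operationExecutor.VerifyControllerAttachedVolume started",
		"operationExecutor.MountVolume started",
		"MountVolume.SetUp succeeded",
	}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, *uid) {
			continue
		}
		for i, p := range phases {
			if !strings.Contains(line, p) {
				continue
			}
			// Crude name extraction: the kubelet logs `for volume \"<name>\"`
			// with literally escaped quotes, as seen in this journal.
			name := "?"
			if _, after, ok := strings.Cut(line, `for volume \"`); ok {
				name, _, _ = strings.Cut(after, `\"`)
			}
			fmt.Printf("phase %d/%d  %-55s %s\n", i+1, len(phases), p, name)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
}
```

An assumed invocation would be `journalctl -u kubelet --no-pager | go run volphases.go -uid c87efd10-3959-4dfa-ab6a-88810fe9a0fa`; a volume that starts a phase but never reaches "SetUp succeeded" is the one holding up pod startup.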
Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575046 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575104 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gc7w\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-kube-api-access-4gc7w\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575126 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575198 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575241 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575281 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575361 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575402 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc 
kubenswrapper[4832]: I1002 18:47:15.575425 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ab42783-2e22-4b2f-9fab-be96ba65e345-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575507 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575591 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv958\" (UniqueName: \"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-kube-api-access-xv958\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575606 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575630 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575649 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575688 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.575722 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: W1002 18:47:15.582241 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ddcf4a8_ee94_4877_b399_19a6f872a0c5.slice/crio-0c7c35e7d9c437a5e23dcd3ba5cfdddd019627c4411133a74dcd809a91b1058f WatchSource:0}: Error finding container 0c7c35e7d9c437a5e23dcd3ba5cfdddd019627c4411133a74dcd809a91b1058f: Status 404 returned error can't find the 
container with id 0c7c35e7d9c437a5e23dcd3ba5cfdddd019627c4411133a74dcd809a91b1058f Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.587859 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-cwwd2"] Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.677877 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678104 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678128 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678146 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678166 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gc7w\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-kube-api-access-4gc7w\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678203 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678230 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678249 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678304 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678332 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678357 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ab42783-2e22-4b2f-9fab-be96ba65e345-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678395 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678432 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv958\" (UniqueName: \"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-kube-api-access-xv958\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678448 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678466 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678483 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678508 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678529 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678578 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-config-data\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678595 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678636 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ab42783-2e22-4b2f-9fab-be96ba65e345-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.678844 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.679205 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.679537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.679829 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.680146 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.680219 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " 
pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.680404 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.680831 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-config-data\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.681469 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.682205 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.682620 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ab42783-2e22-4b2f-9fab-be96ba65e345-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.682731 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ab42783-2e22-4b2f-9fab-be96ba65e345-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.683018 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.683316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.685529 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.686014 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.688292 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.689647 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ab42783-2e22-4b2f-9fab-be96ba65e345-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.690012 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.690524 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.700393 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gc7w\" (UniqueName: \"kubernetes.io/projected/9ab42783-2e22-4b2f-9fab-be96ba65e345-kube-api-access-4gc7w\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.701363 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv958\" (UniqueName: \"kubernetes.io/projected/c87efd10-3959-4dfa-ab6a-88810fe9a0fa-kube-api-access-xv958\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.740530 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c87efd10-3959-4dfa-ab6a-88810fe9a0fa\") " pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.755005 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"9ab42783-2e22-4b2f-9fab-be96ba65e345\") " pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.777977 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 02 18:47:15 crc kubenswrapper[4832]: I1002 18:47:15.792003 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:16 crc kubenswrapper[4832]: I1002 18:47:16.004761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a0ac381-9d1a-4068-b5bb-350b3979485e","Type":"ContainerStarted","Data":"15783642403cc79e56dfdf258fe0ee4d3b5716df9116a0ee71fbe90118cbb9e2"} Oct 02 18:47:16 crc kubenswrapper[4832]: I1002 18:47:16.006998 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" event={"ID":"5ddcf4a8-ee94-4877-b399-19a6f872a0c5","Type":"ContainerStarted","Data":"0c7c35e7d9c437a5e23dcd3ba5cfdddd019627c4411133a74dcd809a91b1058f"} Oct 02 18:47:16 crc kubenswrapper[4832]: I1002 18:47:16.413697 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 02 18:47:16 crc kubenswrapper[4832]: I1002 18:47:16.424521 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 02 18:47:16 crc kubenswrapper[4832]: W1002 18:47:16.437925 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc87efd10_3959_4dfa_ab6a_88810fe9a0fa.slice/crio-dcc058e43277daf9b15fe4dae7b1054e6e7175478a3eab8a9d25c710e43207ff WatchSource:0}: Error finding container dcc058e43277daf9b15fe4dae7b1054e6e7175478a3eab8a9d25c710e43207ff: Status 404 returned error can't find the container with id dcc058e43277daf9b15fe4dae7b1054e6e7175478a3eab8a9d25c710e43207ff Oct 02 18:47:16 crc kubenswrapper[4832]: I1002 18:47:16.778423 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: i/o timeout" Oct 02 18:47:17 crc kubenswrapper[4832]: I1002 18:47:17.022662 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Oct 02 18:47:17 crc kubenswrapper[4832]: I1002 18:47:17.036164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a0ac381-9d1a-4068-b5bb-350b3979485e","Type":"ContainerStarted","Data":"8eaa9e5cfe809033a697a5d753174aad96904a4d225dc0d2d6296c037f90e54e"} Oct 02 18:47:17 crc kubenswrapper[4832]: I1002 18:47:17.038391 4832 generic.go:334] "Generic (PLEG): container finished" podID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" containerID="5c71055820ba1708880974235af8a408edf28fe09af93dcbd052a61f586ada72" exitCode=0 Oct 02 18:47:17 crc kubenswrapper[4832]: I1002 18:47:17.038479 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" event={"ID":"5ddcf4a8-ee94-4877-b399-19a6f872a0c5","Type":"ContainerDied","Data":"5c71055820ba1708880974235af8a408edf28fe09af93dcbd052a61f586ada72"} Oct 02 18:47:17 crc kubenswrapper[4832]: I1002 18:47:17.041221 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9ab42783-2e22-4b2f-9fab-be96ba65e345","Type":"ContainerStarted","Data":"9b30246f2a765336d9f96d75c1a92382fd892da44fe5c5db6bc7b14c0223b137"} Oct 02 18:47:17 crc kubenswrapper[4832]: I1002 18:47:17.043213 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"c87efd10-3959-4dfa-ab6a-88810fe9a0fa","Type":"ContainerStarted","Data":"dcc058e43277daf9b15fe4dae7b1054e6e7175478a3eab8a9d25c710e43207ff"} Oct 02 18:47:17 crc kubenswrapper[4832]: I1002 18:47:17.246565 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ff074fc-c56e-40f3-a327-b829d84c9866" path="/var/lib/kubelet/pods/4ff074fc-c56e-40f3-a327-b829d84c9866/volumes" Oct 02 18:47:17 crc kubenswrapper[4832]: I1002 18:47:17.249240 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fd4cd0-fd84-45cb-9c68-0985f52a1054" path="/var/lib/kubelet/pods/c9fd4cd0-fd84-45cb-9c68-0985f52a1054/volumes" Oct 02 18:47:18 crc kubenswrapper[4832]: I1002 18:47:18.056220 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" event={"ID":"5ddcf4a8-ee94-4877-b399-19a6f872a0c5","Type":"ContainerStarted","Data":"aac2528547a2f318bc06a9702efcff8c560c535273a1c7d26e02e3a5eebba47a"} Oct 02 18:47:18 crc kubenswrapper[4832]: I1002 18:47:18.056502 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:18 crc kubenswrapper[4832]: I1002 18:47:18.083962 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" podStartSLOduration=6.083945156 podStartE2EDuration="6.083945156s" podCreationTimestamp="2025-10-02 18:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:47:18.073977195 +0000 UTC m=+1595.043420077" watchObservedRunningTime="2025-10-02 18:47:18.083945156 +0000 UTC m=+1595.053388028" Oct 02 18:47:18 crc kubenswrapper[4832]: E1002 18:47:18.727493 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="8a0ac381-9d1a-4068-b5bb-350b3979485e" Oct 02 18:47:19 crc kubenswrapper[4832]: I1002 18:47:19.072095 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9ab42783-2e22-4b2f-9fab-be96ba65e345","Type":"ContainerStarted","Data":"bd78c088e632208edba6e847026a5205984036828b1c89ec75081694b27472cc"} Oct 02 18:47:19 crc kubenswrapper[4832]: I1002 18:47:19.074418 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c87efd10-3959-4dfa-ab6a-88810fe9a0fa","Type":"ContainerStarted","Data":"2162ca5b8e0638f6e5c75ca0a3e4465bca593de4dea4ef8029f9c347739be274"} Oct 02 18:47:19 crc kubenswrapper[4832]: I1002 18:47:19.077706 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a0ac381-9d1a-4068-b5bb-350b3979485e","Type":"ContainerStarted","Data":"35c18cd190dfa66b175c249a3117ae75ab066d69a9b5ef01aa2df254073f1556"} Oct 02 18:47:19 crc kubenswrapper[4832]: E1002 18:47:19.079688 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8a0ac381-9d1a-4068-b5bb-350b3979485e" Oct 02 18:47:20 crc kubenswrapper[4832]: I1002 18:47:20.106328 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 
02 18:47:20 crc kubenswrapper[4832]: E1002 18:47:20.108695 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8a0ac381-9d1a-4068-b5bb-350b3979485e" Oct 02 18:47:21 crc kubenswrapper[4832]: E1002 18:47:21.118790 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="8a0ac381-9d1a-4068-b5bb-350b3979485e" Oct 02 18:47:22 crc kubenswrapper[4832]: I1002 18:47:22.556396 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:22 crc kubenswrapper[4832]: I1002 18:47:22.632284 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fmdft"] Oct 02 18:47:22 crc kubenswrapper[4832]: I1002 18:47:22.632582 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" podUID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" containerName="dnsmasq-dns" containerID="cri-o://b345ba202167c38f50777bf5322b833e6807b2109a7e1de77cfed6f0213ae877" gracePeriod=10 Oct 02 18:47:22 crc kubenswrapper[4832]: I1002 18:47:22.899159 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-qvcnk"] Oct 02 18:47:22 crc kubenswrapper[4832]: I1002 18:47:22.902504 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:22 crc kubenswrapper[4832]: I1002 18:47:22.916066 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-qvcnk"] Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.086554 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.086705 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.086749 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-config\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.086837 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.087015 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.087177 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-kube-api-access-kt5hw\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.087277 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.148554 4832 generic.go:334] "Generic (PLEG): container finished" podID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" containerID="b345ba202167c38f50777bf5322b833e6807b2109a7e1de77cfed6f0213ae877" exitCode=0 Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.148621 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" 
event={"ID":"36bfe800-0313-487f-a2ba-bef9b88ff8c7","Type":"ContainerDied","Data":"b345ba202167c38f50777bf5322b833e6807b2109a7e1de77cfed6f0213ae877"} Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.189594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.189946 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-kube-api-access-kt5hw\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.190097 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.190359 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.190507 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.190612 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-config\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.190752 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.191297 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.191713 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" 
(UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.191758 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.193036 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.193746 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-config\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.194376 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.233312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5hw\" (UniqueName: \"kubernetes.io/projected/2e85de4e-7cb3-48e0-86f7-3faaf7e067d1-kube-api-access-kt5hw\") pod \"dnsmasq-dns-6f6df4f56c-qvcnk\" (UID: \"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1\") " pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.400605 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.498348 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-sb\") pod \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.498488 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tbx4\" (UniqueName: \"kubernetes.io/projected/36bfe800-0313-487f-a2ba-bef9b88ff8c7-kube-api-access-4tbx4\") pod \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.498600 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-config\") pod \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.498696 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-swift-storage-0\") pod \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.500184 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-svc\") pod \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.500321 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-nb\") pod \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\" (UID: \"36bfe800-0313-487f-a2ba-bef9b88ff8c7\") " Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.513248 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bfe800-0313-487f-a2ba-bef9b88ff8c7-kube-api-access-4tbx4" (OuterVolumeSpecName: "kube-api-access-4tbx4") pod "36bfe800-0313-487f-a2ba-bef9b88ff8c7" (UID: "36bfe800-0313-487f-a2ba-bef9b88ff8c7"). InnerVolumeSpecName "kube-api-access-4tbx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.531994 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.574316 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-config" (OuterVolumeSpecName: "config") pod "36bfe800-0313-487f-a2ba-bef9b88ff8c7" (UID: "36bfe800-0313-487f-a2ba-bef9b88ff8c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.601751 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36bfe800-0313-487f-a2ba-bef9b88ff8c7" (UID: "36bfe800-0313-487f-a2ba-bef9b88ff8c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.612388 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.612416 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tbx4\" (UniqueName: \"kubernetes.io/projected/36bfe800-0313-487f-a2ba-bef9b88ff8c7-kube-api-access-4tbx4\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.612432 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.645138 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "36bfe800-0313-487f-a2ba-bef9b88ff8c7" (UID: "36bfe800-0313-487f-a2ba-bef9b88ff8c7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.674909 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36bfe800-0313-487f-a2ba-bef9b88ff8c7" (UID: "36bfe800-0313-487f-a2ba-bef9b88ff8c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.697018 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36bfe800-0313-487f-a2ba-bef9b88ff8c7" (UID: "36bfe800-0313-487f-a2ba-bef9b88ff8c7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.716839 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.720851 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:23 crc kubenswrapper[4832]: I1002 18:47:23.720874 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36bfe800-0313-487f-a2ba-bef9b88ff8c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:24 crc kubenswrapper[4832]: I1002 18:47:24.165655 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" event={"ID":"36bfe800-0313-487f-a2ba-bef9b88ff8c7","Type":"ContainerDied","Data":"368026965f5302958ce1d7a5c8142f6966675b523120b97aeaa424c022c6f01b"} Oct 02 18:47:24 crc kubenswrapper[4832]: I1002 18:47:24.165756 4832 scope.go:117] "RemoveContainer" containerID="b345ba202167c38f50777bf5322b833e6807b2109a7e1de77cfed6f0213ae877" Oct 02 18:47:24 crc kubenswrapper[4832]: I1002 18:47:24.165761 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-fmdft" Oct 02 18:47:24 crc kubenswrapper[4832]: I1002 18:47:24.200860 4832 scope.go:117] "RemoveContainer" containerID="401e40ce60292402ec5076a3a607af29fd727972156c77bbbf6d5ad5d8f40e4e" Oct 02 18:47:24 crc kubenswrapper[4832]: I1002 18:47:24.243697 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-qvcnk"] Oct 02 18:47:24 crc kubenswrapper[4832]: I1002 18:47:24.505749 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fmdft"] Oct 02 18:47:24 crc kubenswrapper[4832]: I1002 18:47:24.514532 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-fmdft"] Oct 02 18:47:25 crc kubenswrapper[4832]: I1002 18:47:25.180375 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e85de4e-7cb3-48e0-86f7-3faaf7e067d1" containerID="79cbecc92707e33e5eba590488ed6c889d7128ab726b0672cffac212d96dd053" exitCode=0 Oct 02 18:47:25 crc kubenswrapper[4832]: I1002 18:47:25.180441 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" event={"ID":"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1","Type":"ContainerDied","Data":"79cbecc92707e33e5eba590488ed6c889d7128ab726b0672cffac212d96dd053"} Oct 02 18:47:25 crc kubenswrapper[4832]: I1002 18:47:25.180809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" event={"ID":"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1","Type":"ContainerStarted","Data":"8e5497c850c00ca6e778307d51aff12fcc49dd869cb9697cc43dc864b0553ae3"} Oct 02 18:47:25 crc kubenswrapper[4832]: I1002 18:47:25.267851 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" path="/var/lib/kubelet/pods/36bfe800-0313-487f-a2ba-bef9b88ff8c7/volumes" Oct 02 18:47:26 crc kubenswrapper[4832]: I1002 18:47:26.199211 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" 
event={"ID":"2e85de4e-7cb3-48e0-86f7-3faaf7e067d1","Type":"ContainerStarted","Data":"58700aba53e395fa8e7945160d408dbdc05d261518a72fa7f2c942402bb9fef5"} Oct 02 18:47:26 crc kubenswrapper[4832]: I1002 18:47:26.199811 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:26 crc kubenswrapper[4832]: I1002 18:47:26.234176 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" podStartSLOduration=4.234154284 podStartE2EDuration="4.234154284s" podCreationTimestamp="2025-10-02 18:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:47:26.222661317 +0000 UTC m=+1603.192104179" watchObservedRunningTime="2025-10-02 18:47:26.234154284 +0000 UTC m=+1603.203597156" Oct 02 18:47:28 crc kubenswrapper[4832]: I1002 18:47:28.237254 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lsb6q" event={"ID":"36e2b7cf-ba0c-4217-93e4-503ce1e40755","Type":"ContainerStarted","Data":"67ffdcd0a2bbf716a6e5e85c8f598529c70b3988728dba06b2606d6b51005967"} Oct 02 18:47:28 crc kubenswrapper[4832]: I1002 18:47:28.259573 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-lsb6q" podStartSLOduration=2.007424721 podStartE2EDuration="39.259545738s" podCreationTimestamp="2025-10-02 18:46:49 +0000 UTC" firstStartedPulling="2025-10-02 18:46:50.230620623 +0000 UTC m=+1567.200063505" lastFinishedPulling="2025-10-02 18:47:27.48274165 +0000 UTC m=+1604.452184522" observedRunningTime="2025-10-02 18:47:28.254005036 +0000 UTC m=+1605.223447908" watchObservedRunningTime="2025-10-02 18:47:28.259545738 +0000 UTC m=+1605.228988610" Oct 02 18:47:30 crc kubenswrapper[4832]: I1002 18:47:30.261489 4832 generic.go:334] "Generic (PLEG): container finished" podID="36e2b7cf-ba0c-4217-93e4-503ce1e40755" containerID="67ffdcd0a2bbf716a6e5e85c8f598529c70b3988728dba06b2606d6b51005967" exitCode=0 Oct 02 18:47:30 crc kubenswrapper[4832]: I1002 18:47:30.261570 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lsb6q" event={"ID":"36e2b7cf-ba0c-4217-93e4-503ce1e40755","Type":"ContainerDied","Data":"67ffdcd0a2bbf716a6e5e85c8f598529c70b3988728dba06b2606d6b51005967"} Oct 02 18:47:31 crc kubenswrapper[4832]: I1002 18:47:31.774683 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-lsb6q" Oct 02 18:47:31 crc kubenswrapper[4832]: I1002 18:47:31.929478 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25m4d\" (UniqueName: \"kubernetes.io/projected/36e2b7cf-ba0c-4217-93e4-503ce1e40755-kube-api-access-25m4d\") pod \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " Oct 02 18:47:31 crc kubenswrapper[4832]: I1002 18:47:31.929618 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-combined-ca-bundle\") pod \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " Oct 02 18:47:31 crc kubenswrapper[4832]: I1002 18:47:31.929688 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-config-data\") pod \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\" (UID: \"36e2b7cf-ba0c-4217-93e4-503ce1e40755\") " Oct 02 18:47:31 crc kubenswrapper[4832]: I1002 18:47:31.936080 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e2b7cf-ba0c-4217-93e4-503ce1e40755-kube-api-access-25m4d" (OuterVolumeSpecName: "kube-api-access-25m4d") pod "36e2b7cf-ba0c-4217-93e4-503ce1e40755" (UID: "36e2b7cf-ba0c-4217-93e4-503ce1e40755"). InnerVolumeSpecName "kube-api-access-25m4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:31 crc kubenswrapper[4832]: I1002 18:47:31.984203 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36e2b7cf-ba0c-4217-93e4-503ce1e40755" (UID: "36e2b7cf-ba0c-4217-93e4-503ce1e40755"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:32 crc kubenswrapper[4832]: I1002 18:47:32.033252 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:32 crc kubenswrapper[4832]: I1002 18:47:32.033308 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25m4d\" (UniqueName: \"kubernetes.io/projected/36e2b7cf-ba0c-4217-93e4-503ce1e40755-kube-api-access-25m4d\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:32 crc kubenswrapper[4832]: I1002 18:47:32.056002 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-config-data" (OuterVolumeSpecName: "config-data") pod "36e2b7cf-ba0c-4217-93e4-503ce1e40755" (UID: "36e2b7cf-ba0c-4217-93e4-503ce1e40755"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:32 crc kubenswrapper[4832]: I1002 18:47:32.135366 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36e2b7cf-ba0c-4217-93e4-503ce1e40755-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:32 crc kubenswrapper[4832]: I1002 18:47:32.300816 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-lsb6q" event={"ID":"36e2b7cf-ba0c-4217-93e4-503ce1e40755","Type":"ContainerDied","Data":"0a865f8a8942b545bdfdfae03660e56d1450a5e65955cabf3b9adc26c4aa69bf"} Oct 02 18:47:32 crc kubenswrapper[4832]: I1002 18:47:32.300885 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a865f8a8942b545bdfdfae03660e56d1450a5e65955cabf3b9adc26c4aa69bf" Oct 02 18:47:32 crc kubenswrapper[4832]: I1002 18:47:32.300951 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-lsb6q" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.427456 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5779d8467c-rr8wn"] Oct 02 18:47:33 crc kubenswrapper[4832]: E1002 18:47:33.428238 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e2b7cf-ba0c-4217-93e4-503ce1e40755" containerName="heat-db-sync" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.428254 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e2b7cf-ba0c-4217-93e4-503ce1e40755" containerName="heat-db-sync" Oct 02 18:47:33 crc kubenswrapper[4832]: E1002 18:47:33.428299 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" containerName="init" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.428308 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" containerName="init" Oct 02 18:47:33 crc kubenswrapper[4832]: E1002 18:47:33.428322 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" containerName="dnsmasq-dns" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.428332 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" containerName="dnsmasq-dns" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.428659 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bfe800-0313-487f-a2ba-bef9b88ff8c7" containerName="dnsmasq-dns" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.428679 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e2b7cf-ba0c-4217-93e4-503ce1e40755" containerName="heat-db-sync" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.429507 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.455908 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5b65db8df4-nckpl"] Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.457833 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.474060 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5779d8467c-rr8wn"] Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.496464 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-698cc5cc6c-gmw7p"] Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.498871 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.520762 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b65db8df4-nckpl"] Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.533444 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-qvcnk" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.567745 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-698cc5cc6c-gmw7p"] Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlf58\" (UniqueName: \"kubernetes.io/projected/be959889-fe35-4de3-b7b2-82df67812b7d-kube-api-access-hlf58\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581545 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-combined-ca-bundle\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581586 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-config-data\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581614 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84x4x\" (UniqueName: \"kubernetes.io/projected/fb6c24b8-fca2-49c2-8f1c-a41614962b83-kube-api-access-84x4x\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581641 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-config-data-custom\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581680 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dd5p\" (UniqueName: \"kubernetes.io/projected/138ff508-ca7b-4291-8f0d-90ddc11770fb-kube-api-access-9dd5p\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " 
pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581739 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-public-tls-certs\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581754 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-config-data\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581775 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-config-data-custom\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581805 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-internal-tls-certs\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581854 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-config-data\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581879 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-combined-ca-bundle\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581910 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-config-data-custom\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581951 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-combined-ca-bundle\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.581973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-public-tls-certs\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.582003 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-internal-tls-certs\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.632242 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-cwwd2"] Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.632487 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" podUID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" containerName="dnsmasq-dns" containerID="cri-o://aac2528547a2f318bc06a9702efcff8c560c535273a1c7d26e02e3a5eebba47a" gracePeriod=10 Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.683828 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-internal-tls-certs\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.683921 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-config-data\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.683944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-combined-ca-bundle\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.683978 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-config-data-custom\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684027 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-combined-ca-bundle\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684080 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-public-tls-certs\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684118 4832 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-internal-tls-certs\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684168 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-combined-ca-bundle\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684186 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlf58\" (UniqueName: \"kubernetes.io/projected/be959889-fe35-4de3-b7b2-82df67812b7d-kube-api-access-hlf58\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-config-data\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684310 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84x4x\" (UniqueName: \"kubernetes.io/projected/fb6c24b8-fca2-49c2-8f1c-a41614962b83-kube-api-access-84x4x\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-config-data-custom\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684361 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dd5p\" (UniqueName: \"kubernetes.io/projected/138ff508-ca7b-4291-8f0d-90ddc11770fb-kube-api-access-9dd5p\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684414 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-public-tls-certs\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684433 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-config-data\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.684453 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-config-data-custom\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.691706 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-combined-ca-bundle\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.694518 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-config-data\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.694825 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-config-data-custom\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.694896 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-internal-tls-certs\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.695127 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-combined-ca-bundle\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.695316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb6c24b8-fca2-49c2-8f1c-a41614962b83-config-data\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.696091 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-combined-ca-bundle\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.696246 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-config-data-custom\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.696414 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-public-tls-certs\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.697950 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-config-data-custom\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.698714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be959889-fe35-4de3-b7b2-82df67812b7d-public-tls-certs\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.699654 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-config-data\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.704889 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlf58\" (UniqueName: \"kubernetes.io/projected/be959889-fe35-4de3-b7b2-82df67812b7d-kube-api-access-hlf58\") pod \"heat-api-5b65db8df4-nckpl\" (UID: \"be959889-fe35-4de3-b7b2-82df67812b7d\") " pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.705063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84x4x\" (UniqueName: \"kubernetes.io/projected/fb6c24b8-fca2-49c2-8f1c-a41614962b83-kube-api-access-84x4x\") pod \"heat-engine-5779d8467c-rr8wn\" (UID: \"fb6c24b8-fca2-49c2-8f1c-a41614962b83\") " pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.707335 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/138ff508-ca7b-4291-8f0d-90ddc11770fb-internal-tls-certs\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.708147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dd5p\" (UniqueName: \"kubernetes.io/projected/138ff508-ca7b-4291-8f0d-90ddc11770fb-kube-api-access-9dd5p\") pod \"heat-cfnapi-698cc5cc6c-gmw7p\" (UID: \"138ff508-ca7b-4291-8f0d-90ddc11770fb\") " pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.754880 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.774941 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:33 crc kubenswrapper[4832]: I1002 18:47:33.822346 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.345454 4832 generic.go:334] "Generic (PLEG): container finished" podID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" containerID="aac2528547a2f318bc06a9702efcff8c560c535273a1c7d26e02e3a5eebba47a" exitCode=0 Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.345702 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" event={"ID":"5ddcf4a8-ee94-4877-b399-19a6f872a0c5","Type":"ContainerDied","Data":"aac2528547a2f318bc06a9702efcff8c560c535273a1c7d26e02e3a5eebba47a"} Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.382125 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.482638 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5779d8467c-rr8wn"] Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.506753 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-openstack-edpm-ipam\") pod \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.506807 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-sb\") pod \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.506838 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jbtv\" (UniqueName: \"kubernetes.io/projected/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-kube-api-access-6jbtv\") pod \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.506863 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-svc\") pod \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.506991 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-config\") pod \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.507012 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-nb\") pod \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.507040 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-swift-storage-0\") pod \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\" (UID: \"5ddcf4a8-ee94-4877-b399-19a6f872a0c5\") " Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.512330 4832 
Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.570963 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ddcf4a8-ee94-4877-b399-19a6f872a0c5" (UID: "5ddcf4a8-ee94-4877-b399-19a6f872a0c5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.581255 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ddcf4a8-ee94-4877-b399-19a6f872a0c5" (UID: "5ddcf4a8-ee94-4877-b399-19a6f872a0c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.581814 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ddcf4a8-ee94-4877-b399-19a6f872a0c5" (UID: "5ddcf4a8-ee94-4877-b399-19a6f872a0c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.587817 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-config" (OuterVolumeSpecName: "config") pod "5ddcf4a8-ee94-4877-b399-19a6f872a0c5" (UID: "5ddcf4a8-ee94-4877-b399-19a6f872a0c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.594270 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5ddcf4a8-ee94-4877-b399-19a6f872a0c5" (UID: "5ddcf4a8-ee94-4877-b399-19a6f872a0c5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.606192 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5ddcf4a8-ee94-4877-b399-19a6f872a0c5" (UID: "5ddcf4a8-ee94-4877-b399-19a6f872a0c5"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.613356 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.613386 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.613396 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jbtv\" (UniqueName: \"kubernetes.io/projected/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-kube-api-access-6jbtv\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.613407 4832 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.613416 4832 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-config\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.613426 4832 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.613434 4832 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5ddcf4a8-ee94-4877-b399-19a6f872a0c5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:34 crc kubenswrapper[4832]: W1002 18:47:34.630806 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe959889_fe35_4de3_b7b2_82df67812b7d.slice/crio-78825a78953ddf8006e9e7b306f19b8b93b51619e1632f9be94a8cbd7da73633 WatchSource:0}: Error finding container 78825a78953ddf8006e9e7b306f19b8b93b51619e1632f9be94a8cbd7da73633: Status 404 returned error can't find the container with id 78825a78953ddf8006e9e7b306f19b8b93b51619e1632f9be94a8cbd7da73633 Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.657234 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5b65db8df4-nckpl"] Oct 02 18:47:34 crc kubenswrapper[4832]: I1002 18:47:34.885610 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-698cc5cc6c-gmw7p"] Oct 02 18:47:34 crc kubenswrapper[4832]: W1002 18:47:34.888044 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod138ff508_ca7b_4291_8f0d_90ddc11770fb.slice/crio-7a334c6269c233587a77d47711340af802ae4795868087b406eb50f7cb1508d1 WatchSource:0}: Error finding container 7a334c6269c233587a77d47711340af802ae4795868087b406eb50f7cb1508d1: Status 404 returned error can't find the container with id 7a334c6269c233587a77d47711340af802ae4795868087b406eb50f7cb1508d1 Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.359373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5779d8467c-rr8wn" 
event={"ID":"fb6c24b8-fca2-49c2-8f1c-a41614962b83","Type":"ContainerStarted","Data":"c9834899dd90710e3de9d821e0b96e71571ac31643d21265f389f14bb6d50988"} Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.359644 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5779d8467c-rr8wn" event={"ID":"fb6c24b8-fca2-49c2-8f1c-a41614962b83","Type":"ContainerStarted","Data":"87a2ed70421ce76dc7d02f5b81d3067391faae96cb1f098701cb023fbc1a3164"} Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.359662 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.370703 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b65db8df4-nckpl" event={"ID":"be959889-fe35-4de3-b7b2-82df67812b7d","Type":"ContainerStarted","Data":"78825a78953ddf8006e9e7b306f19b8b93b51619e1632f9be94a8cbd7da73633"} Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.377780 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" event={"ID":"138ff508-ca7b-4291-8f0d-90ddc11770fb","Type":"ContainerStarted","Data":"7a334c6269c233587a77d47711340af802ae4795868087b406eb50f7cb1508d1"} Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.381750 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5779d8467c-rr8wn" podStartSLOduration=2.381727238 podStartE2EDuration="2.381727238s" podCreationTimestamp="2025-10-02 18:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:47:35.375295628 +0000 UTC m=+1612.344738510" watchObservedRunningTime="2025-10-02 18:47:35.381727238 +0000 UTC m=+1612.351170120" Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.383993 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" event={"ID":"5ddcf4a8-ee94-4877-b399-19a6f872a0c5","Type":"ContainerDied","Data":"0c7c35e7d9c437a5e23dcd3ba5cfdddd019627c4411133a74dcd809a91b1058f"} Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.384065 4832 scope.go:117] "RemoveContainer" containerID="aac2528547a2f318bc06a9702efcff8c560c535273a1c7d26e02e3a5eebba47a" Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.384140 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-cwwd2" Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.415906 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-cwwd2"] Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.419137 4832 scope.go:117] "RemoveContainer" containerID="5c71055820ba1708880974235af8a408edf28fe09af93dcbd052a61f586ada72" Oct 02 18:47:35 crc kubenswrapper[4832]: I1002 18:47:35.436040 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-cwwd2"] Oct 02 18:47:36 crc kubenswrapper[4832]: I1002 18:47:36.316623 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 02 18:47:37 crc kubenswrapper[4832]: I1002 18:47:37.247133 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" path="/var/lib/kubelet/pods/5ddcf4a8-ee94-4877-b399-19a6f872a0c5/volumes" Oct 02 18:47:38 crc kubenswrapper[4832]: I1002 18:47:38.428869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5b65db8df4-nckpl" event={"ID":"be959889-fe35-4de3-b7b2-82df67812b7d","Type":"ContainerStarted","Data":"fdf61fbefdd81b0f484a8088dd94d0cecb2454d784d52cfdb62a53742b05e741"} Oct 02 18:47:38 crc kubenswrapper[4832]: I1002 18:47:38.429525 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5b65db8df4-nckpl" Oct 02 18:47:38 crc kubenswrapper[4832]: I1002 18:47:38.430869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" event={"ID":"138ff508-ca7b-4291-8f0d-90ddc11770fb","Type":"ContainerStarted","Data":"581bb684459eaf42310dc864ce43e4948bd8cc2e1a0508aa613ea85a87a79964"} Oct 02 18:47:38 crc kubenswrapper[4832]: I1002 18:47:38.431000 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" Oct 02 18:47:38 crc kubenswrapper[4832]: I1002 18:47:38.435054 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8a0ac381-9d1a-4068-b5bb-350b3979485e","Type":"ContainerStarted","Data":"7571840bcaee02ab81fe7aebb66327a20c7a0eca5cdc51f8f0ad88a444911f06"} Oct 02 18:47:38 crc kubenswrapper[4832]: I1002 18:47:38.457200 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5b65db8df4-nckpl" podStartSLOduration=3.166789083 podStartE2EDuration="5.457176595s" podCreationTimestamp="2025-10-02 18:47:33 +0000 UTC" firstStartedPulling="2025-10-02 18:47:34.64262084 +0000 UTC m=+1611.612063712" lastFinishedPulling="2025-10-02 18:47:36.933008352 +0000 UTC m=+1613.902451224" observedRunningTime="2025-10-02 18:47:38.447098841 +0000 UTC m=+1615.416541713" watchObservedRunningTime="2025-10-02 18:47:38.457176595 +0000 UTC m=+1615.426619467" Oct 02 18:47:38 crc kubenswrapper[4832]: I1002 18:47:38.494416 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.241750197 podStartE2EDuration="45.49439582s" podCreationTimestamp="2025-10-02 18:46:53 +0000 UTC" firstStartedPulling="2025-10-02 18:46:54.684397035 +0000 UTC m=+1571.653839907" lastFinishedPulling="2025-10-02 18:47:36.937042648 +0000 UTC m=+1613.906485530" observedRunningTime="2025-10-02 18:47:38.483592745 +0000 UTC m=+1615.453035627" watchObservedRunningTime="2025-10-02 18:47:38.49439582 +0000 UTC m=+1615.463838702" Oct 02 18:47:45 crc kubenswrapper[4832]: I1002 
Oct 02 18:47:45 crc kubenswrapper[4832]: I1002 18:47:45.643237 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p" podStartSLOduration=10.598599366 podStartE2EDuration="12.643223467s" podCreationTimestamp="2025-10-02 18:47:33 +0000 UTC" firstStartedPulling="2025-10-02 18:47:34.890107734 +0000 UTC m=+1611.859550596" lastFinishedPulling="2025-10-02 18:47:36.934731805 +0000 UTC m=+1613.904174697" observedRunningTime="2025-10-02 18:47:38.524576967 +0000 UTC m=+1615.494019849" watchObservedRunningTime="2025-10-02 18:47:45.643223467 +0000 UTC m=+1622.612666339"
Oct 02 18:47:45 crc kubenswrapper[4832]: I1002 18:47:45.685294 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-55b645bf4f-f4w5l"]
Oct 02 18:47:45 crc kubenswrapper[4832]: I1002 18:47:45.691391 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-55b645bf4f-f4w5l" podUID="43d18e47-9b7e-43e2-be43-f0ea46363395" containerName="heat-api" containerID="cri-o://962f9bcfe66ab54156048f85ec0f78cf786098e0034e68a616e64dd5fd157f03" gracePeriod=60
Oct 02 18:47:45 crc kubenswrapper[4832]: I1002 18:47:45.781082 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-698cc5cc6c-gmw7p"
Oct 02 18:47:45 crc kubenswrapper[4832]: I1002 18:47:45.853466 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5c9b5ccf5d-z74b9"]
Oct 02 18:47:45 crc kubenswrapper[4832]: I1002 18:47:45.853801 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" podUID="47747b14-f01e-4098-b420-c8c046a4c97b" containerName="heat-cfnapi" containerID="cri-o://7500001a722414de123d7f2a94f13d6171be72038878a977964e5d8664c70b56" gracePeriod=60
Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.718439 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch"]
Oct 02 18:47:48 crc kubenswrapper[4832]: E1002 18:47:48.719406 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" containerName="init"
Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.719418 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" containerName="init"
Oct 02 18:47:48 crc kubenswrapper[4832]: E1002 18:47:48.719466 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" containerName="dnsmasq-dns"
Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.719473 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" containerName="dnsmasq-dns"
Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.719680 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ddcf4a8-ee94-4877-b399-19a6f872a0c5" containerName="dnsmasq-dns"
Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.720505 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch"
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.723322 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.723820 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.724016 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.724040 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.738420 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch"] Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.834991 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.835242 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.835369 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgrc\" (UniqueName: \"kubernetes.io/projected/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-kube-api-access-6mgrc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.835525 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.916559 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-55b645bf4f-f4w5l" podUID="43d18e47-9b7e-43e2-be43-f0ea46363395" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.213:8004/healthcheck\": read tcp 10.217.0.2:42340->10.217.0.213:8004: read: connection reset by peer" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.938369 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: 
\"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.938532 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.938591 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mgrc\" (UniqueName: \"kubernetes.io/projected/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-kube-api-access-6mgrc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.938693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.944953 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.946227 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.957601 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:48 crc kubenswrapper[4832]: I1002 18:47:48.959090 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mgrc\" (UniqueName: \"kubernetes.io/projected/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-kube-api-access-6mgrc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.044928 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.208560 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" podUID="47747b14-f01e-4098-b420-c8c046a4c97b" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.214:8000/healthcheck\": read tcp 10.217.0.2:42496->10.217.0.214:8000: read: connection reset by peer" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.597755 4832 generic.go:334] "Generic (PLEG): container finished" podID="47747b14-f01e-4098-b420-c8c046a4c97b" containerID="7500001a722414de123d7f2a94f13d6171be72038878a977964e5d8664c70b56" exitCode=0 Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.598368 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" event={"ID":"47747b14-f01e-4098-b420-c8c046a4c97b","Type":"ContainerDied","Data":"7500001a722414de123d7f2a94f13d6171be72038878a977964e5d8664c70b56"} Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.605031 4832 generic.go:334] "Generic (PLEG): container finished" podID="43d18e47-9b7e-43e2-be43-f0ea46363395" containerID="962f9bcfe66ab54156048f85ec0f78cf786098e0034e68a616e64dd5fd157f03" exitCode=0 Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.605083 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55b645bf4f-f4w5l" event={"ID":"43d18e47-9b7e-43e2-be43-f0ea46363395","Type":"ContainerDied","Data":"962f9bcfe66ab54156048f85ec0f78cf786098e0034e68a616e64dd5fd157f03"} Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.746187 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.863077 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-combined-ca-bundle\") pod \"43d18e47-9b7e-43e2-be43-f0ea46363395\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.863165 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data\") pod \"43d18e47-9b7e-43e2-be43-f0ea46363395\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.863224 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-internal-tls-certs\") pod \"43d18e47-9b7e-43e2-be43-f0ea46363395\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.863346 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-public-tls-certs\") pod \"43d18e47-9b7e-43e2-be43-f0ea46363395\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.863454 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data-custom\") pod \"43d18e47-9b7e-43e2-be43-f0ea46363395\" (UID: 
\"43d18e47-9b7e-43e2-be43-f0ea46363395\") " Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.863482 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqqqz\" (UniqueName: \"kubernetes.io/projected/43d18e47-9b7e-43e2-be43-f0ea46363395-kube-api-access-sqqqz\") pod \"43d18e47-9b7e-43e2-be43-f0ea46363395\" (UID: \"43d18e47-9b7e-43e2-be43-f0ea46363395\") " Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.873183 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43d18e47-9b7e-43e2-be43-f0ea46363395-kube-api-access-sqqqz" (OuterVolumeSpecName: "kube-api-access-sqqqz") pod "43d18e47-9b7e-43e2-be43-f0ea46363395" (UID: "43d18e47-9b7e-43e2-be43-f0ea46363395"). InnerVolumeSpecName "kube-api-access-sqqqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.881418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "43d18e47-9b7e-43e2-be43-f0ea46363395" (UID: "43d18e47-9b7e-43e2-be43-f0ea46363395"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.977717 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "43d18e47-9b7e-43e2-be43-f0ea46363395" (UID: "43d18e47-9b7e-43e2-be43-f0ea46363395"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.987090 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.987121 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.987130 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqqqz\" (UniqueName: \"kubernetes.io/projected/43d18e47-9b7e-43e2-be43-f0ea46363395-kube-api-access-sqqqz\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:49 crc kubenswrapper[4832]: I1002 18:47:49.993422 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43d18e47-9b7e-43e2-be43-f0ea46363395" (UID: "43d18e47-9b7e-43e2-be43-f0ea46363395"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.023600 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data" (OuterVolumeSpecName: "config-data") pod "43d18e47-9b7e-43e2-be43-f0ea46363395" (UID: "43d18e47-9b7e-43e2-be43-f0ea46363395"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.090149 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.090395 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.092286 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "43d18e47-9b7e-43e2-be43-f0ea46363395" (UID: "43d18e47-9b7e-43e2-be43-f0ea46363395"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.192516 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d18e47-9b7e-43e2-be43-f0ea46363395-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.230674 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.395885 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data-custom\") pod \"47747b14-f01e-4098-b420-c8c046a4c97b\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.395972 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data\") pod \"47747b14-f01e-4098-b420-c8c046a4c97b\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.395992 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85cdp\" (UniqueName: \"kubernetes.io/projected/47747b14-f01e-4098-b420-c8c046a4c97b-kube-api-access-85cdp\") pod \"47747b14-f01e-4098-b420-c8c046a4c97b\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.396099 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-internal-tls-certs\") pod \"47747b14-f01e-4098-b420-c8c046a4c97b\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.396255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-combined-ca-bundle\") pod \"47747b14-f01e-4098-b420-c8c046a4c97b\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.396355 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-public-tls-certs\") pod 
\"47747b14-f01e-4098-b420-c8c046a4c97b\" (UID: \"47747b14-f01e-4098-b420-c8c046a4c97b\") " Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.410574 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "47747b14-f01e-4098-b420-c8c046a4c97b" (UID: "47747b14-f01e-4098-b420-c8c046a4c97b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.419440 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47747b14-f01e-4098-b420-c8c046a4c97b-kube-api-access-85cdp" (OuterVolumeSpecName: "kube-api-access-85cdp") pod "47747b14-f01e-4098-b420-c8c046a4c97b" (UID: "47747b14-f01e-4098-b420-c8c046a4c97b"). InnerVolumeSpecName "kube-api-access-85cdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.435708 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch"] Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.454484 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47747b14-f01e-4098-b420-c8c046a4c97b" (UID: "47747b14-f01e-4098-b420-c8c046a4c97b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.476944 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data" (OuterVolumeSpecName: "config-data") pod "47747b14-f01e-4098-b420-c8c046a4c97b" (UID: "47747b14-f01e-4098-b420-c8c046a4c97b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.478001 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "47747b14-f01e-4098-b420-c8c046a4c97b" (UID: "47747b14-f01e-4098-b420-c8c046a4c97b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.496450 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "47747b14-f01e-4098-b420-c8c046a4c97b" (UID: "47747b14-f01e-4098-b420-c8c046a4c97b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.499529 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.499560 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.499570 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.499580 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.499590 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85cdp\" (UniqueName: \"kubernetes.io/projected/47747b14-f01e-4098-b420-c8c046a4c97b-kube-api-access-85cdp\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.499599 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47747b14-f01e-4098-b420-c8c046a4c97b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.617684 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" event={"ID":"4f9739db-9008-4848-bbc0-ddaa4da9c9b8","Type":"ContainerStarted","Data":"48053caebdab8276d3d54e13f1a3f2cb9a93ed2abb318bc88d903af32b4fda75"} Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.619530 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.619536 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c9b5ccf5d-z74b9" event={"ID":"47747b14-f01e-4098-b420-c8c046a4c97b","Type":"ContainerDied","Data":"65c9d6f892c43a1005a4e6b40c515df53cedc9b8d1d9ab3fd0a1ded2e8b6b30d"} Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.619832 4832 scope.go:117] "RemoveContainer" containerID="7500001a722414de123d7f2a94f13d6171be72038878a977964e5d8664c70b56" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.622648 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55b645bf4f-f4w5l" event={"ID":"43d18e47-9b7e-43e2-be43-f0ea46363395","Type":"ContainerDied","Data":"1a2ba0c29e4f106af5769d0939ef0912a2a38b323d5f3098f26817b4605b07aa"} Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.622707 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-55b645bf4f-f4w5l" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.651739 4832 scope.go:117] "RemoveContainer" containerID="962f9bcfe66ab54156048f85ec0f78cf786098e0034e68a616e64dd5fd157f03" Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.659806 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5c9b5ccf5d-z74b9"] Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.672125 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5c9b5ccf5d-z74b9"] Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.682124 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-55b645bf4f-f4w5l"] Oct 02 18:47:50 crc kubenswrapper[4832]: I1002 18:47:50.694877 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-55b645bf4f-f4w5l"] Oct 02 18:47:51 crc kubenswrapper[4832]: I1002 18:47:51.237442 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43d18e47-9b7e-43e2-be43-f0ea46363395" path="/var/lib/kubelet/pods/43d18e47-9b7e-43e2-be43-f0ea46363395/volumes" Oct 02 18:47:51 crc kubenswrapper[4832]: I1002 18:47:51.238341 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47747b14-f01e-4098-b420-c8c046a4c97b" path="/var/lib/kubelet/pods/47747b14-f01e-4098-b420-c8c046a4c97b/volumes" Oct 02 18:47:51 crc kubenswrapper[4832]: I1002 18:47:51.635636 4832 generic.go:334] "Generic (PLEG): container finished" podID="9ab42783-2e22-4b2f-9fab-be96ba65e345" containerID="bd78c088e632208edba6e847026a5205984036828b1c89ec75081694b27472cc" exitCode=0 Oct 02 18:47:51 crc kubenswrapper[4832]: I1002 18:47:51.635734 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9ab42783-2e22-4b2f-9fab-be96ba65e345","Type":"ContainerDied","Data":"bd78c088e632208edba6e847026a5205984036828b1c89ec75081694b27472cc"} Oct 02 18:47:51 crc kubenswrapper[4832]: I1002 18:47:51.641195 4832 generic.go:334] "Generic (PLEG): container finished" podID="c87efd10-3959-4dfa-ab6a-88810fe9a0fa" containerID="2162ca5b8e0638f6e5c75ca0a3e4465bca593de4dea4ef8029f9c347739be274" exitCode=0 Oct 02 18:47:51 crc kubenswrapper[4832]: I1002 18:47:51.641291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c87efd10-3959-4dfa-ab6a-88810fe9a0fa","Type":"ContainerDied","Data":"2162ca5b8e0638f6e5c75ca0a3e4465bca593de4dea4ef8029f9c347739be274"} Oct 02 18:47:52 crc kubenswrapper[4832]: I1002 18:47:52.656962 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9ab42783-2e22-4b2f-9fab-be96ba65e345","Type":"ContainerStarted","Data":"22ce773d4878797f3c4e50f466ee5be12a5aedc33f9fb941a20797e19dbde159"} Oct 02 18:47:52 crc kubenswrapper[4832]: I1002 18:47:52.657418 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 02 18:47:52 crc kubenswrapper[4832]: I1002 18:47:52.660239 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c87efd10-3959-4dfa-ab6a-88810fe9a0fa","Type":"ContainerStarted","Data":"bdbc392bf304eea550c72c5ac12ccf2167f384998d3cf295024888b4c8fa1475"} Oct 02 18:47:52 crc kubenswrapper[4832]: I1002 18:47:52.660465 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:47:52 crc kubenswrapper[4832]: I1002 18:47:52.692529 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.692510484 podStartE2EDuration="37.692510484s" podCreationTimestamp="2025-10-02 18:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:47:52.68465996 +0000 UTC m=+1629.654102832" watchObservedRunningTime="2025-10-02 18:47:52.692510484 +0000 UTC m=+1629.661953356" Oct 02 18:47:52 crc kubenswrapper[4832]: I1002 18:47:52.730729 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.730712731 podStartE2EDuration="37.730712731s" podCreationTimestamp="2025-10-02 18:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 18:47:52.729305517 +0000 UTC m=+1629.698748389" watchObservedRunningTime="2025-10-02 18:47:52.730712731 +0000 UTC m=+1629.700155593" Oct 02 18:47:53 crc kubenswrapper[4832]: I1002 18:47:53.802638 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5779d8467c-rr8wn" Oct 02 18:47:53 crc kubenswrapper[4832]: I1002 18:47:53.867485 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-86bf6cf48b-jmwqc"] Oct 02 18:47:53 crc kubenswrapper[4832]: I1002 18:47:53.867746 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-86bf6cf48b-jmwqc" podUID="641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" containerName="heat-engine" containerID="cri-o://9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" gracePeriod=60 Oct 02 18:47:56 crc kubenswrapper[4832]: E1002 18:47:56.745487 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:47:56 crc kubenswrapper[4832]: E1002 18:47:56.748919 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:47:56 crc kubenswrapper[4832]: E1002 18:47:56.750461 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:47:56 crc kubenswrapper[4832]: E1002 18:47:56.750503 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-86bf6cf48b-jmwqc" podUID="641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" containerName="heat-engine" Oct 02 18:47:56 crc kubenswrapper[4832]: I1002 18:47:56.875793 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:47:56 crc kubenswrapper[4832]: I1002 18:47:56.875852 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.477335 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-6zkpx"] Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.526223 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-6zkpx"] Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.541439 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-pjwt8"] Oct 02 18:47:59 crc kubenswrapper[4832]: E1002 18:47:59.541984 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47747b14-f01e-4098-b420-c8c046a4c97b" containerName="heat-cfnapi" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.542012 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="47747b14-f01e-4098-b420-c8c046a4c97b" containerName="heat-cfnapi" Oct 02 18:47:59 crc kubenswrapper[4832]: E1002 18:47:59.542033 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43d18e47-9b7e-43e2-be43-f0ea46363395" containerName="heat-api" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.542040 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="43d18e47-9b7e-43e2-be43-f0ea46363395" containerName="heat-api" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.542248 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="43d18e47-9b7e-43e2-be43-f0ea46363395" containerName="heat-api" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.542292 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="47747b14-f01e-4098-b420-c8c046a4c97b" containerName="heat-cfnapi" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.543096 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.553866 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pjwt8"] Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.628194 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmrsl\" (UniqueName: \"kubernetes.io/projected/6150c26f-2bc7-4e66-84ef-b7241196ee1f-kube-api-access-cmrsl\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.628286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-scripts\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.628569 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-config-data\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.628776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-combined-ca-bundle\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.733497 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-combined-ca-bundle\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.733643 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmrsl\" (UniqueName: \"kubernetes.io/projected/6150c26f-2bc7-4e66-84ef-b7241196ee1f-kube-api-access-cmrsl\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.733693 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-scripts\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.733759 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-config-data\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.751786 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-config-data\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " 
pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.752352 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-combined-ca-bundle\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.762013 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-scripts\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.763011 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmrsl\" (UniqueName: \"kubernetes.io/projected/6150c26f-2bc7-4e66-84ef-b7241196ee1f-kube-api-access-cmrsl\") pod \"aodh-db-sync-pjwt8\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:47:59 crc kubenswrapper[4832]: I1002 18:47:59.877500 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:48:01 crc kubenswrapper[4832]: I1002 18:48:01.238189 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3da35e-519a-404c-91d1-5ca7f0071d2e" path="/var/lib/kubelet/pods/3c3da35e-519a-404c-91d1-5ca7f0071d2e/volumes" Oct 02 18:48:04 crc kubenswrapper[4832]: I1002 18:48:04.324658 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pjwt8"] Oct 02 18:48:04 crc kubenswrapper[4832]: I1002 18:48:04.879396 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pjwt8" event={"ID":"6150c26f-2bc7-4e66-84ef-b7241196ee1f","Type":"ContainerStarted","Data":"bbc46c43ea75841884cd3db9c50ccf6c0078eeb33d31678ba427c22ba305ffe7"} Oct 02 18:48:05 crc kubenswrapper[4832]: I1002 18:48:05.780418 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9ab42783-2e22-4b2f-9fab-be96ba65e345" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.10:5671: connect: connection refused" Oct 02 18:48:05 crc kubenswrapper[4832]: I1002 18:48:05.794215 4832 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c87efd10-3959-4dfa-ab6a-88810fe9a0fa" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.11:5671: connect: connection refused" Oct 02 18:48:06 crc kubenswrapper[4832]: E1002 18:48:06.746217 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:48:06 crc kubenswrapper[4832]: E1002 18:48:06.747685 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:48:06 crc kubenswrapper[4832]: E1002 18:48:06.748902 4832 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Oct 02 18:48:06 crc kubenswrapper[4832]: E1002 18:48:06.748931 4832 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-86bf6cf48b-jmwqc" podUID="641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" containerName="heat-engine" Oct 02 18:48:07 crc kubenswrapper[4832]: I1002 18:48:07.926937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" event={"ID":"4f9739db-9008-4848-bbc0-ddaa4da9c9b8","Type":"ContainerStarted","Data":"40d5b241615733219e3bbd747c54d5a0410acc7d2ebec9476a3df86e7dfe7da6"} Oct 02 18:48:07 crc kubenswrapper[4832]: I1002 18:48:07.959822 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" podStartSLOduration=3.927786852 podStartE2EDuration="19.959771273s" podCreationTimestamp="2025-10-02 18:47:48 +0000 UTC" firstStartedPulling="2025-10-02 18:47:50.42587987 +0000 UTC m=+1627.395322742" lastFinishedPulling="2025-10-02 18:48:06.457864291 +0000 UTC m=+1643.427307163" observedRunningTime="2025-10-02 18:48:07.947538974 +0000 UTC m=+1644.916981836" watchObservedRunningTime="2025-10-02 18:48:07.959771273 +0000 UTC m=+1644.929214145" Oct 02 18:48:09 crc kubenswrapper[4832]: I1002 18:48:09.951778 4832 generic.go:334] "Generic (PLEG): container finished" podID="641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" containerID="9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" exitCode=0 Oct 02 18:48:09 crc kubenswrapper[4832]: I1002 18:48:09.951863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bf6cf48b-jmwqc" event={"ID":"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5","Type":"ContainerDied","Data":"9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c"} Oct 02 18:48:10 crc kubenswrapper[4832]: I1002 18:48:10.964622 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pjwt8" event={"ID":"6150c26f-2bc7-4e66-84ef-b7241196ee1f","Type":"ContainerStarted","Data":"72df4d7bd201709fde9f768fa700f6f8c64587c226c3c8c5fce6c3ffbb767de2"} Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.001320 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-pjwt8" podStartSLOduration=5.735122604 podStartE2EDuration="12.001234225s" podCreationTimestamp="2025-10-02 18:47:59 +0000 UTC" firstStartedPulling="2025-10-02 18:48:04.319035515 +0000 UTC m=+1641.288478387" lastFinishedPulling="2025-10-02 18:48:10.585147136 +0000 UTC m=+1647.554590008" observedRunningTime="2025-10-02 18:48:10.985872198 +0000 UTC m=+1647.955315080" watchObservedRunningTime="2025-10-02 18:48:11.001234225 +0000 UTC m=+1647.970677117" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.024036 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.182663 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data-custom\") pod \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.182964 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data\") pod \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.183004 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-combined-ca-bundle\") pod \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.183034 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvr6m\" (UniqueName: \"kubernetes.io/projected/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-kube-api-access-rvr6m\") pod \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\" (UID: \"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5\") " Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.188633 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-kube-api-access-rvr6m" (OuterVolumeSpecName: "kube-api-access-rvr6m") pod "641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" (UID: "641dee8e-15c6-4d4e-9e8f-8cf62fc353a5"). InnerVolumeSpecName "kube-api-access-rvr6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.188739 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" (UID: "641dee8e-15c6-4d4e-9e8f-8cf62fc353a5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.213609 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" (UID: "641dee8e-15c6-4d4e-9e8f-8cf62fc353a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.242487 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data" (OuterVolumeSpecName: "config-data") pod "641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" (UID: "641dee8e-15c6-4d4e-9e8f-8cf62fc353a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.285741 4832 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.285773 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.285782 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.285791 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvr6m\" (UniqueName: \"kubernetes.io/projected/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5-kube-api-access-rvr6m\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.980234 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-86bf6cf48b-jmwqc" event={"ID":"641dee8e-15c6-4d4e-9e8f-8cf62fc353a5","Type":"ContainerDied","Data":"2c543c81c2051495a298a0c7e1b84240d0fb276b59f29d33d61da3fa10e5c318"} Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.980300 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-86bf6cf48b-jmwqc" Oct 02 18:48:11 crc kubenswrapper[4832]: I1002 18:48:11.981893 4832 scope.go:117] "RemoveContainer" containerID="9cc7f7984a9559af186b58e3acdaadd35ec7302ebdd6806e68de783f54d8de4c" Oct 02 18:48:12 crc kubenswrapper[4832]: I1002 18:48:12.046967 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-86bf6cf48b-jmwqc"] Oct 02 18:48:12 crc kubenswrapper[4832]: I1002 18:48:12.061421 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-86bf6cf48b-jmwqc"] Oct 02 18:48:13 crc kubenswrapper[4832]: I1002 18:48:13.238957 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" path="/var/lib/kubelet/pods/641dee8e-15c6-4d4e-9e8f-8cf62fc353a5/volumes" Oct 02 18:48:14 crc kubenswrapper[4832]: I1002 18:48:14.018213 4832 generic.go:334] "Generic (PLEG): container finished" podID="6150c26f-2bc7-4e66-84ef-b7241196ee1f" containerID="72df4d7bd201709fde9f768fa700f6f8c64587c226c3c8c5fce6c3ffbb767de2" exitCode=0 Oct 02 18:48:14 crc kubenswrapper[4832]: I1002 18:48:14.018280 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pjwt8" event={"ID":"6150c26f-2bc7-4e66-84ef-b7241196ee1f","Type":"ContainerDied","Data":"72df4d7bd201709fde9f768fa700f6f8c64587c226c3c8c5fce6c3ffbb767de2"} Oct 02 18:48:14 crc kubenswrapper[4832]: I1002 18:48:14.881971 4832 scope.go:117] "RemoveContainer" containerID="02b35e9bd924e57a9787f479171ed3c55be531cce04acad93ea397e0ab01912e" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.670401 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.781565 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.794081 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.799926 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmrsl\" (UniqueName: \"kubernetes.io/projected/6150c26f-2bc7-4e66-84ef-b7241196ee1f-kube-api-access-cmrsl\") pod \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.800049 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-config-data\") pod \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.800090 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-combined-ca-bundle\") pod \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.800393 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-scripts\") pod \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\" (UID: \"6150c26f-2bc7-4e66-84ef-b7241196ee1f\") " Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.806316 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6150c26f-2bc7-4e66-84ef-b7241196ee1f-kube-api-access-cmrsl" (OuterVolumeSpecName: "kube-api-access-cmrsl") pod "6150c26f-2bc7-4e66-84ef-b7241196ee1f" (UID: "6150c26f-2bc7-4e66-84ef-b7241196ee1f"). InnerVolumeSpecName "kube-api-access-cmrsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.816222 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-scripts" (OuterVolumeSpecName: "scripts") pod "6150c26f-2bc7-4e66-84ef-b7241196ee1f" (UID: "6150c26f-2bc7-4e66-84ef-b7241196ee1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.895493 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-config-data" (OuterVolumeSpecName: "config-data") pod "6150c26f-2bc7-4e66-84ef-b7241196ee1f" (UID: "6150c26f-2bc7-4e66-84ef-b7241196ee1f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.904862 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmrsl\" (UniqueName: \"kubernetes.io/projected/6150c26f-2bc7-4e66-84ef-b7241196ee1f-kube-api-access-cmrsl\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.904892 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.904901 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-scripts\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:15 crc kubenswrapper[4832]: I1002 18:48:15.916510 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6150c26f-2bc7-4e66-84ef-b7241196ee1f" (UID: "6150c26f-2bc7-4e66-84ef-b7241196ee1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:16 crc kubenswrapper[4832]: I1002 18:48:16.011412 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6150c26f-2bc7-4e66-84ef-b7241196ee1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:16 crc kubenswrapper[4832]: I1002 18:48:16.057694 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pjwt8" event={"ID":"6150c26f-2bc7-4e66-84ef-b7241196ee1f","Type":"ContainerDied","Data":"bbc46c43ea75841884cd3db9c50ccf6c0078eeb33d31678ba427c22ba305ffe7"} Oct 02 18:48:16 crc kubenswrapper[4832]: I1002 18:48:16.057735 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbc46c43ea75841884cd3db9c50ccf6c0078eeb33d31678ba427c22ba305ffe7" Oct 02 18:48:16 crc kubenswrapper[4832]: I1002 18:48:16.057788 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pjwt8" Oct 02 18:48:19 crc kubenswrapper[4832]: I1002 18:48:19.551300 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Oct 02 18:48:19 crc kubenswrapper[4832]: I1002 18:48:19.552390 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-listener" containerID="cri-o://a8c319a783234ef8e46e550d557694d921e9cc7d1f37f2efa7c64639efd71036" gracePeriod=30 Oct 02 18:48:19 crc kubenswrapper[4832]: I1002 18:48:19.552536 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-api" containerID="cri-o://346ffe19709bf006fe8f1548ef5133878fe707aa29eb63fcba950e877d3446f6" gracePeriod=30 Oct 02 18:48:19 crc kubenswrapper[4832]: I1002 18:48:19.552671 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-notifier" containerID="cri-o://e93da71715267eee779591d3111d9ffb8027577ad4001642a0de7ef82272ee0f" gracePeriod=30 Oct 02 18:48:19 crc kubenswrapper[4832]: I1002 18:48:19.552709 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-evaluator" containerID="cri-o://8ad03b05da840fe55c4236656bf2202fa604cf9de5ef9296fc4551bbbcbb752d" gracePeriod=30 Oct 02 18:48:20 crc kubenswrapper[4832]: I1002 18:48:20.111009 4832 generic.go:334] "Generic (PLEG): container finished" podID="4f9739db-9008-4848-bbc0-ddaa4da9c9b8" containerID="40d5b241615733219e3bbd747c54d5a0410acc7d2ebec9476a3df86e7dfe7da6" exitCode=0 Oct 02 18:48:20 crc kubenswrapper[4832]: I1002 18:48:20.111066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" event={"ID":"4f9739db-9008-4848-bbc0-ddaa4da9c9b8","Type":"ContainerDied","Data":"40d5b241615733219e3bbd747c54d5a0410acc7d2ebec9476a3df86e7dfe7da6"} Oct 02 18:48:20 crc kubenswrapper[4832]: I1002 18:48:20.114565 4832 generic.go:334] "Generic (PLEG): container finished" podID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerID="8ad03b05da840fe55c4236656bf2202fa604cf9de5ef9296fc4551bbbcbb752d" exitCode=0 Oct 02 18:48:20 crc kubenswrapper[4832]: I1002 18:48:20.114607 4832 generic.go:334] "Generic (PLEG): container finished" podID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerID="346ffe19709bf006fe8f1548ef5133878fe707aa29eb63fcba950e877d3446f6" exitCode=0 Oct 02 18:48:20 crc kubenswrapper[4832]: I1002 18:48:20.114629 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerDied","Data":"8ad03b05da840fe55c4236656bf2202fa604cf9de5ef9296fc4551bbbcbb752d"} Oct 02 18:48:20 crc kubenswrapper[4832]: I1002 18:48:20.114676 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerDied","Data":"346ffe19709bf006fe8f1548ef5133878fe707aa29eb63fcba950e877d3446f6"} Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.830479 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.858411 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-repo-setup-combined-ca-bundle\") pod \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.858580 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-ssh-key\") pod \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.858743 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mgrc\" (UniqueName: \"kubernetes.io/projected/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-kube-api-access-6mgrc\") pod \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.858811 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-inventory\") pod \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\" (UID: \"4f9739db-9008-4848-bbc0-ddaa4da9c9b8\") " Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.867008 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4f9739db-9008-4848-bbc0-ddaa4da9c9b8" (UID: "4f9739db-9008-4848-bbc0-ddaa4da9c9b8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.868233 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-kube-api-access-6mgrc" (OuterVolumeSpecName: "kube-api-access-6mgrc") pod "4f9739db-9008-4848-bbc0-ddaa4da9c9b8" (UID: "4f9739db-9008-4848-bbc0-ddaa4da9c9b8"). InnerVolumeSpecName "kube-api-access-6mgrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.916451 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4f9739db-9008-4848-bbc0-ddaa4da9c9b8" (UID: "4f9739db-9008-4848-bbc0-ddaa4da9c9b8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.924530 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-inventory" (OuterVolumeSpecName: "inventory") pod "4f9739db-9008-4848-bbc0-ddaa4da9c9b8" (UID: "4f9739db-9008-4848-bbc0-ddaa4da9c9b8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.960986 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.961019 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.961029 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mgrc\" (UniqueName: \"kubernetes.io/projected/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-kube-api-access-6mgrc\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:21 crc kubenswrapper[4832]: I1002 18:48:21.961043 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f9739db-9008-4848-bbc0-ddaa4da9c9b8-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.206863 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch" event={"ID":"4f9739db-9008-4848-bbc0-ddaa4da9c9b8","Type":"ContainerDied","Data":"48053caebdab8276d3d54e13f1a3f2cb9a93ed2abb318bc88d903af32b4fda75"} Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.206916 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48053caebdab8276d3d54e13f1a3f2cb9a93ed2abb318bc88d903af32b4fda75" Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.207014 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.261356 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"]
Oct 02 18:48:22 crc kubenswrapper[4832]: E1002 18:48:22.261981 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" containerName="heat-engine"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.262001 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" containerName="heat-engine"
Oct 02 18:48:22 crc kubenswrapper[4832]: E1002 18:48:22.262046 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f9739db-9008-4848-bbc0-ddaa4da9c9b8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.262057 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f9739db-9008-4848-bbc0-ddaa4da9c9b8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Oct 02 18:48:22 crc kubenswrapper[4832]: E1002 18:48:22.262073 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6150c26f-2bc7-4e66-84ef-b7241196ee1f" containerName="aodh-db-sync"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.262082 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6150c26f-2bc7-4e66-84ef-b7241196ee1f" containerName="aodh-db-sync"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.262386 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6150c26f-2bc7-4e66-84ef-b7241196ee1f" containerName="aodh-db-sync"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.262426 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f9739db-9008-4848-bbc0-ddaa4da9c9b8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.262461 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="641dee8e-15c6-4d4e-9e8f-8cf62fc353a5" containerName="heat-engine"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.263468 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.270966 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.271145 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.271721 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.271912 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.298524 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"]
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.371164 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.371257 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.371506 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glkxv\" (UniqueName: \"kubernetes.io/projected/5cf50fba-3a89-451f-adfe-f64eb401d544-kube-api-access-glkxv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.474952 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glkxv\" (UniqueName: \"kubernetes.io/projected/5cf50fba-3a89-451f-adfe-f64eb401d544-kube-api-access-glkxv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.475220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.475405 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.479633 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.481225 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.503956 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glkxv\" (UniqueName: \"kubernetes.io/projected/5cf50fba-3a89-451f-adfe-f64eb401d544-kube-api-access-glkxv\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xzpxr\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:22 crc kubenswrapper[4832]: I1002 18:48:22.589009 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:23 crc kubenswrapper[4832]: W1002 18:48:23.237384 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf50fba_3a89_451f_adfe_f64eb401d544.slice/crio-9f95bc8ccad53e388ff71f7d3b316bcb3ac242b864bf61f2759ae5d34b31e816 WatchSource:0}: Error finding container 9f95bc8ccad53e388ff71f7d3b316bcb3ac242b864bf61f2759ae5d34b31e816: Status 404 returned error can't find the container with id 9f95bc8ccad53e388ff71f7d3b316bcb3ac242b864bf61f2759ae5d34b31e816
Oct 02 18:48:23 crc kubenswrapper[4832]: I1002 18:48:23.245929 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"]
Oct 02 18:48:24 crc kubenswrapper[4832]: I1002 18:48:24.233814 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr" event={"ID":"5cf50fba-3a89-451f-adfe-f64eb401d544","Type":"ContainerStarted","Data":"961e1242fa6b2b72325eeb95cddaca683409e2ac1376f7b123c95d8a7d35762b"}
Oct 02 18:48:24 crc kubenswrapper[4832]: I1002 18:48:24.234650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr" event={"ID":"5cf50fba-3a89-451f-adfe-f64eb401d544","Type":"ContainerStarted","Data":"9f95bc8ccad53e388ff71f7d3b316bcb3ac242b864bf61f2759ae5d34b31e816"}
Oct 02 18:48:24 crc kubenswrapper[4832]: I1002 18:48:24.276318 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr" podStartSLOduration=1.723782267 podStartE2EDuration="2.27629334s" podCreationTimestamp="2025-10-02 18:48:22 +0000 UTC" firstStartedPulling="2025-10-02 18:48:23.24367947 +0000 UTC m=+1660.213122352" lastFinishedPulling="2025-10-02 18:48:23.796190513 +0000 UTC m=+1660.765633425" observedRunningTime="2025-10-02 18:48:24.260427678 +0000 UTC m=+1661.229870550" watchObservedRunningTime="2025-10-02 18:48:24.27629334 +0000 UTC m=+1661.245736222"
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.264186 4832 generic.go:334] "Generic (PLEG): container finished" podID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerID="a8c319a783234ef8e46e550d557694d921e9cc7d1f37f2efa7c64639efd71036" exitCode=0
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.264793 4832 generic.go:334] "Generic (PLEG): container finished" podID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerID="e93da71715267eee779591d3111d9ffb8027577ad4001642a0de7ef82272ee0f" exitCode=0
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.264324 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerDied","Data":"a8c319a783234ef8e46e550d557694d921e9cc7d1f37f2efa7c64639efd71036"}
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.264978 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerDied","Data":"e93da71715267eee779591d3111d9ffb8027577ad4001642a0de7ef82272ee0f"}
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.874532 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.972649 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-config-data\") pod \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") "
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.972742 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggbv5\" (UniqueName: \"kubernetes.io/projected/726d8b9a-8bc1-4cea-a65f-e494847a6b72-kube-api-access-ggbv5\") pod \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") "
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.972784 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-scripts\") pod \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") "
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.972881 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-combined-ca-bundle\") pod \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") "
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.972916 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-public-tls-certs\") pod \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") "
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.973049 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-internal-tls-certs\") pod \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\" (UID: \"726d8b9a-8bc1-4cea-a65f-e494847a6b72\") "
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.986176 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-scripts" (OuterVolumeSpecName: "scripts") pod "726d8b9a-8bc1-4cea-a65f-e494847a6b72" (UID: "726d8b9a-8bc1-4cea-a65f-e494847a6b72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:48:25 crc kubenswrapper[4832]: I1002 18:48:25.986721 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726d8b9a-8bc1-4cea-a65f-e494847a6b72-kube-api-access-ggbv5" (OuterVolumeSpecName: "kube-api-access-ggbv5") pod "726d8b9a-8bc1-4cea-a65f-e494847a6b72" (UID: "726d8b9a-8bc1-4cea-a65f-e494847a6b72"). InnerVolumeSpecName "kube-api-access-ggbv5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.060369 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "726d8b9a-8bc1-4cea-a65f-e494847a6b72" (UID: "726d8b9a-8bc1-4cea-a65f-e494847a6b72"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.078722 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggbv5\" (UniqueName: \"kubernetes.io/projected/726d8b9a-8bc1-4cea-a65f-e494847a6b72-kube-api-access-ggbv5\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.079060 4832 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-scripts\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.079072 4832 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.091881 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "726d8b9a-8bc1-4cea-a65f-e494847a6b72" (UID: "726d8b9a-8bc1-4cea-a65f-e494847a6b72"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.158012 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-config-data" (OuterVolumeSpecName: "config-data") pod "726d8b9a-8bc1-4cea-a65f-e494847a6b72" (UID: "726d8b9a-8bc1-4cea-a65f-e494847a6b72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.165336 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "726d8b9a-8bc1-4cea-a65f-e494847a6b72" (UID: "726d8b9a-8bc1-4cea-a65f-e494847a6b72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.181129 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-config-data\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.181164 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.181177 4832 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726d8b9a-8bc1-4cea-a65f-e494847a6b72-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.303224 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"726d8b9a-8bc1-4cea-a65f-e494847a6b72","Type":"ContainerDied","Data":"de553126ff3f66a5b994613905ff565beb5aa0ff6377bb13685828d22b994ef9"}
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.303449 4832 scope.go:117] "RemoveContainer" containerID="a8c319a783234ef8e46e550d557694d921e9cc7d1f37f2efa7c64639efd71036"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.303737 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.333666 4832 scope.go:117] "RemoveContainer" containerID="e93da71715267eee779591d3111d9ffb8027577ad4001642a0de7ef82272ee0f"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.363912 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.384925 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.393564 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.393775 4832 scope.go:117] "RemoveContainer" containerID="8ad03b05da840fe55c4236656bf2202fa604cf9de5ef9296fc4551bbbcbb752d"
Oct 02 18:48:26 crc kubenswrapper[4832]: E1002 18:48:26.394008 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-notifier"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.394026 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-notifier"
Oct 02 18:48:26 crc kubenswrapper[4832]: E1002 18:48:26.394058 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-api"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.394064 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-api"
Oct 02 18:48:26 crc kubenswrapper[4832]: E1002 18:48:26.394092 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-evaluator"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.394098 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-evaluator"
Oct 02 18:48:26 crc kubenswrapper[4832]: E1002 18:48:26.394115 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-listener"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.394121 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-listener"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.394336 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-notifier"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.394355 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-api"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.394377 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-evaluator"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.394394 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" containerName="aodh-listener"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.397121 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.399882 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.400192 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.400454 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-hgfvc"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.400759 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.404606 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.410140 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.428872 4832 scope.go:117] "RemoveContainer" containerID="346ffe19709bf006fe8f1548ef5133878fe707aa29eb63fcba950e877d3446f6"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.592251 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-public-tls-certs\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.592394 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.592912 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzqn4\" (UniqueName: \"kubernetes.io/projected/6bd80e3d-9654-4e34-8739-e718f4884c75-kube-api-access-lzqn4\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.593094 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-scripts\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.593152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-config-data\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.593185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-internal-tls-certs\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.695296 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.696178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzqn4\" (UniqueName: \"kubernetes.io/projected/6bd80e3d-9654-4e34-8739-e718f4884c75-kube-api-access-lzqn4\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.696298 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-scripts\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.696331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-config-data\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.696345 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-internal-tls-certs\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.696380 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-public-tls-certs\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.700535 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-combined-ca-bundle\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.701238 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-config-data\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.701507 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-scripts\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.701537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-internal-tls-certs\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.702177 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6bd80e3d-9654-4e34-8739-e718f4884c75-public-tls-certs\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.715790 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzqn4\" (UniqueName: \"kubernetes.io/projected/6bd80e3d-9654-4e34-8739-e718f4884c75-kube-api-access-lzqn4\") pod \"aodh-0\" (UID: \"6bd80e3d-9654-4e34-8739-e718f4884c75\") " pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.725297 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.875675 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 18:48:26 crc kubenswrapper[4832]: I1002 18:48:26.875975 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 18:48:27 crc kubenswrapper[4832]: I1002 18:48:27.243062 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726d8b9a-8bc1-4cea-a65f-e494847a6b72" path="/var/lib/kubelet/pods/726d8b9a-8bc1-4cea-a65f-e494847a6b72/volumes"
Oct 02 18:48:27 crc kubenswrapper[4832]: I1002 18:48:27.244064 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Oct 02 18:48:27 crc kubenswrapper[4832]: I1002 18:48:27.316784 4832 generic.go:334] "Generic (PLEG): container finished" podID="5cf50fba-3a89-451f-adfe-f64eb401d544" containerID="961e1242fa6b2b72325eeb95cddaca683409e2ac1376f7b123c95d8a7d35762b" exitCode=0
Oct 02 18:48:27 crc kubenswrapper[4832]: I1002 18:48:27.316885 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr" event={"ID":"5cf50fba-3a89-451f-adfe-f64eb401d544","Type":"ContainerDied","Data":"961e1242fa6b2b72325eeb95cddaca683409e2ac1376f7b123c95d8a7d35762b"}
Oct 02 18:48:27 crc kubenswrapper[4832]: I1002 18:48:27.318755 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6bd80e3d-9654-4e34-8739-e718f4884c75","Type":"ContainerStarted","Data":"8877ab4c997f9288ffbef4f67a2f0b0715719a0c7c59134302cc64d05d22524f"}
Oct 02 18:48:28 crc kubenswrapper[4832]: I1002 18:48:28.334029 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6bd80e3d-9654-4e34-8739-e718f4884c75","Type":"ContainerStarted","Data":"25722d53bd8d943ffa776f772f88a4ae2ccb8fb4fff9755a2b129d670d905de2"}
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.003505 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.172682 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-inventory\") pod \"5cf50fba-3a89-451f-adfe-f64eb401d544\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") "
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.173054 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-ssh-key\") pod \"5cf50fba-3a89-451f-adfe-f64eb401d544\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") "
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.173336 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glkxv\" (UniqueName: \"kubernetes.io/projected/5cf50fba-3a89-451f-adfe-f64eb401d544-kube-api-access-glkxv\") pod \"5cf50fba-3a89-451f-adfe-f64eb401d544\" (UID: \"5cf50fba-3a89-451f-adfe-f64eb401d544\") "
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.195289 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf50fba-3a89-451f-adfe-f64eb401d544-kube-api-access-glkxv" (OuterVolumeSpecName: "kube-api-access-glkxv") pod "5cf50fba-3a89-451f-adfe-f64eb401d544" (UID: "5cf50fba-3a89-451f-adfe-f64eb401d544"). InnerVolumeSpecName "kube-api-access-glkxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.209501 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-inventory" (OuterVolumeSpecName: "inventory") pod "5cf50fba-3a89-451f-adfe-f64eb401d544" (UID: "5cf50fba-3a89-451f-adfe-f64eb401d544"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.214651 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5cf50fba-3a89-451f-adfe-f64eb401d544" (UID: "5cf50fba-3a89-451f-adfe-f64eb401d544"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.294064 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-inventory\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.313359 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cf50fba-3a89-451f-adfe-f64eb401d544-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.314422 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glkxv\" (UniqueName: \"kubernetes.io/projected/5cf50fba-3a89-451f-adfe-f64eb401d544-kube-api-access-glkxv\") on node \"crc\" DevicePath \"\""
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.372662 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.372678 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xzpxr" event={"ID":"5cf50fba-3a89-451f-adfe-f64eb401d544","Type":"ContainerDied","Data":"9f95bc8ccad53e388ff71f7d3b316bcb3ac242b864bf61f2759ae5d34b31e816"}
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.374786 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f95bc8ccad53e388ff71f7d3b316bcb3ac242b864bf61f2759ae5d34b31e816"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.382060 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6bd80e3d-9654-4e34-8739-e718f4884c75","Type":"ContainerStarted","Data":"8ff6168b22f7232184ca9dd6345da9796d31b9a494512a15ff03e81f749f1787"}
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.422365 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"]
Oct 02 18:48:29 crc kubenswrapper[4832]: E1002 18:48:29.422871 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf50fba-3a89-451f-adfe-f64eb401d544" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.422893 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf50fba-3a89-451f-adfe-f64eb401d544" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.423129 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf50fba-3a89-451f-adfe-f64eb401d544" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.423949 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.426611 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.426973 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.427136 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.427300 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.465870 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"]
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.519794 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhx24\" (UniqueName: \"kubernetes.io/projected/92baed54-227c-474f-ad5c-b8c14493d2d5-kube-api-access-vhx24\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.519916 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.520417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.520507 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.622253 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.622774 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.623220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhx24\" (UniqueName: \"kubernetes.io/projected/92baed54-227c-474f-ad5c-b8c14493d2d5-kube-api-access-vhx24\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.624339 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.628286 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.629710 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.634982 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.647964 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhx24\" (UniqueName: \"kubernetes.io/projected/92baed54-227c-474f-ad5c-b8c14493d2d5-kube-api-access-vhx24\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:29 crc kubenswrapper[4832]: I1002 18:48:29.741993 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:48:30 crc kubenswrapper[4832]: I1002 18:48:30.773176 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"]
Oct 02 18:48:31 crc kubenswrapper[4832]: I1002 18:48:31.413208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd" event={"ID":"92baed54-227c-474f-ad5c-b8c14493d2d5","Type":"ContainerStarted","Data":"868093c079e939e91ae65d6e8fd844c11f89fc3fc58045b0a86326eb92bd7082"}
Oct 02 18:48:31 crc kubenswrapper[4832]: I1002 18:48:31.416742 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6bd80e3d-9654-4e34-8739-e718f4884c75","Type":"ContainerStarted","Data":"5e4a65c3f281d4bec32f05e5b9618c396d810f69ec1e3651e489f5dcd777e159"}
Oct 02 18:48:32 crc kubenswrapper[4832]: I1002 18:48:32.435822 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd" event={"ID":"92baed54-227c-474f-ad5c-b8c14493d2d5","Type":"ContainerStarted","Data":"118d3f52ebf7d62d58210b02d288c4f180f9f0aae0b7f63b788d0d05212727b0"}
Oct 02 18:48:32 crc kubenswrapper[4832]: I1002 18:48:32.441751 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"6bd80e3d-9654-4e34-8739-e718f4884c75","Type":"ContainerStarted","Data":"8000b7d4a4305c3c1df8dc70122989892dda6e970e9510613fcd421081a3d7ea"}
Oct 02 18:48:32 crc kubenswrapper[4832]: I1002 18:48:32.460748 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd" podStartSLOduration=2.76114706 podStartE2EDuration="3.460724591s" podCreationTimestamp="2025-10-02 18:48:29 +0000 UTC" firstStartedPulling="2025-10-02 18:48:30.775447496 +0000 UTC m=+1667.744890378" lastFinishedPulling="2025-10-02 18:48:31.475025037 +0000 UTC m=+1668.444467909" observedRunningTime="2025-10-02 18:48:32.45649821 +0000 UTC m=+1669.425941102" watchObservedRunningTime="2025-10-02 18:48:32.460724591 +0000 UTC m=+1669.430167483"
Oct 02 18:48:32 crc kubenswrapper[4832]: I1002 18:48:32.492831 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.378859356 podStartE2EDuration="6.492816167s" podCreationTimestamp="2025-10-02 18:48:26 +0000 UTC" firstStartedPulling="2025-10-02 18:48:27.2449297 +0000 UTC m=+1664.214372572" lastFinishedPulling="2025-10-02 18:48:31.358886501 +0000 UTC m=+1668.328329383" observedRunningTime="2025-10-02 18:48:32.488634447 +0000 UTC m=+1669.458077319" watchObservedRunningTime="2025-10-02 18:48:32.492816167 +0000 UTC m=+1669.462259039"
Oct 02 18:48:56 crc kubenswrapper[4832]: I1002 18:48:56.875322 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 18:48:56 crc kubenswrapper[4832]: I1002 18:48:56.875956 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 18:48:56 crc kubenswrapper[4832]: I1002 18:48:56.876023 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg"
Oct 02 18:48:56 crc kubenswrapper[4832]: I1002 18:48:56.878524 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 02 18:48:56 crc kubenswrapper[4832]: I1002 18:48:56.878651 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" gracePeriod=600
Oct 02 18:48:57 crc kubenswrapper[4832]: E1002 18:48:57.036501 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:48:57 crc kubenswrapper[4832]: I1002 18:48:57.823624 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" exitCode=0
Oct 02 18:48:57 crc kubenswrapper[4832]: I1002 18:48:57.823777 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"}
Oct 02 18:48:57 crc kubenswrapper[4832]: I1002 18:48:57.823946 4832 scope.go:117] "RemoveContainer" containerID="d11c7dd5e816b980d09e31f34fc920edcbd862f94c306350838f9fcadaa3f9f6"
Oct 02 18:48:57 crc kubenswrapper[4832]: I1002 18:48:57.825414 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:48:57 crc kubenswrapper[4832]: E1002 18:48:57.826118 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:49:11 crc kubenswrapper[4832]: I1002 18:49:11.223301 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:49:11 crc kubenswrapper[4832]: E1002 18:49:11.224416 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:49:15 crc kubenswrapper[4832]: I1002 18:49:15.192762 4832 scope.go:117] "RemoveContainer" containerID="17957db41c61486cfbf8f9a2a92ca6780f754ee3a70507cee0ff7529315f76a7"
Oct 02 18:49:15 crc kubenswrapper[4832]: I1002 18:49:15.254875 4832 scope.go:117] "RemoveContainer" containerID="698459ce9def4a3003acfbd3a6740f3df798590fe26c13dbfbc0b0d862ce0d61"
Oct 02 18:49:15 crc kubenswrapper[4832]: I1002 18:49:15.305340 4832 scope.go:117] "RemoveContainer" containerID="9c153939e750e4ee27dfcc15613342fa65a3aba2cd37f230b0096894e5a430bd"
Oct 02 18:49:24 crc kubenswrapper[4832]: I1002 18:49:24.224075 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:49:24 crc kubenswrapper[4832]: E1002 18:49:24.225752 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:49:39 crc kubenswrapper[4832]: I1002 18:49:39.222958 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:49:39 crc kubenswrapper[4832]: E1002 18:49:39.224085 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:49:52 crc kubenswrapper[4832]: I1002 18:49:52.223105 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:49:52 crc kubenswrapper[4832]: E1002 18:49:52.224026 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:50:05 crc kubenswrapper[4832]: I1002 18:50:05.245425 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:50:05 crc kubenswrapper[4832]: E1002 18:50:05.246671 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:50:18 crc kubenswrapper[4832]: I1002 18:50:18.223897 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:50:18 crc kubenswrapper[4832]: E1002 18:50:18.225351 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:50:29 crc kubenswrapper[4832]: I1002 18:50:29.223871 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:50:29 crc kubenswrapper[4832]: E1002 18:50:29.225685 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:50:42 crc kubenswrapper[4832]: I1002 18:50:42.223581 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:50:42 crc kubenswrapper[4832]: E1002 18:50:42.225047 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.052061 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hrk2h"]
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.066891 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jzjbv"]
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.080495 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-9vtdd"]
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.091106 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n7h42"]
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.122898 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hrk2h"]
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.147464 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-9vtdd"]
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.161590 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-n7h42"]
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.173453 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jzjbv"]
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.241094 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f" path="/var/lib/kubelet/pods/7588bcde-ed1b-4a8b-a8a6-bcbecdb9544f/volumes"
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.243008 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c035df-1cf1-477d-b195-e9096de5360f" path="/var/lib/kubelet/pods/86c035df-1cf1-477d-b195-e9096de5360f/volumes"
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.244498 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6e63ccc-48be-4d43-aff6-144ad30107df" path="/var/lib/kubelet/pods/a6e63ccc-48be-4d43-aff6-144ad30107df/volumes"
Oct 02 18:50:55 crc kubenswrapper[4832]: I1002 18:50:55.246361 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea8be728-da97-4ac8-91ed-f43b4c0b249b" path="/var/lib/kubelet/pods/ea8be728-da97-4ac8-91ed-f43b4c0b249b/volumes"
Oct 02 18:50:56 crc kubenswrapper[4832]: I1002 18:50:56.224006 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:50:56 crc kubenswrapper[4832]: E1002 18:50:56.224357 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:51:04 crc kubenswrapper[4832]: I1002 18:51:04.094380 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-e572-account-create-8jstm"]
Oct 02 18:51:04 crc kubenswrapper[4832]: I1002 18:51:04.104908 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-e572-account-create-8jstm"]
Oct 02 18:51:05 crc kubenswrapper[4832]: I1002 18:51:05.240451 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a6ce2b-ed5d-4115-9d49-a30fd313044e" path="/var/lib/kubelet/pods/10a6ce2b-ed5d-4115-9d49-a30fd313044e/volumes"
Oct 02 18:51:07 crc kubenswrapper[4832]: I1002 18:51:07.223692 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:51:07 crc kubenswrapper[4832]: E1002 18:51:07.224281 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:51:09 crc kubenswrapper[4832]: I1002 18:51:09.041628 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5d28-account-create-72ps9"]
Oct 02 18:51:09 crc kubenswrapper[4832]: I1002 18:51:09.060117 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2e33-account-create-2p75z"]
Oct 02 18:51:09 crc kubenswrapper[4832]: I1002 18:51:09.074722 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5d28-account-create-72ps9"]
Oct 02 18:51:09 crc kubenswrapper[4832]: I1002 18:51:09.093054 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2e33-account-create-2p75z"]
Oct 02 18:51:09 crc kubenswrapper[4832]: I1002 18:51:09.240721 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc75c972-2235-4b3f-8483-14f22d20f58f" path="/var/lib/kubelet/pods/bc75c972-2235-4b3f-8483-14f22d20f58f/volumes"
Oct 02 18:51:09 crc kubenswrapper[4832]: I1002 18:51:09.241863 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f04348c8-f7ec-43e5-a7aa-f216ff10068e" path="/var/lib/kubelet/pods/f04348c8-f7ec-43e5-a7aa-f216ff10068e/volumes"
Oct 02 18:51:11 crc kubenswrapper[4832]: I1002 18:51:11.031400 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-fec3-account-create-brv2h"]
Oct 02 18:51:11 crc kubenswrapper[4832]: I1002 18:51:11.040890 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-fec3-account-create-brv2h"]
Oct 02 18:51:11 crc kubenswrapper[4832]: I1002 18:51:11.236929 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b6662e-93a0-45b2-9f20-d840085858f3" path="/var/lib/kubelet/pods/95b6662e-93a0-45b2-9f20-d840085858f3/volumes"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.498032 4832 scope.go:117] "RemoveContainer" containerID="88d288e28e942d9a2bc5b757c86807ecb22509ca4dbff5612c08f8daa0e960a3"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.577276 4832 scope.go:117] "RemoveContainer" containerID="e2515e5288e3172c916fb56122fe4932316e51c72ece74820b053199b8c2391a"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.626568 4832 scope.go:117] "RemoveContainer" containerID="098fcc088d803001483b69d8de0faf55d70fa0c89001553c45bb0d2813691889"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.693122 4832 scope.go:117] "RemoveContainer" containerID="96fb5b4f09e2dbc4cd05f6c754a168bdd6fc655dc0cb24719a58c5573a70871c"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.752216 4832 scope.go:117] "RemoveContainer" containerID="2280f581cc933666ee8a671dfecb9c3e3ec1e0f1233070307d2f50a86a009d90"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.817893 4832 scope.go:117] "RemoveContainer" containerID="11695547e5932e697581acc6993725cc4cd96da86561b4a213ec627072825d5d"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.883208 4832 scope.go:117] "RemoveContainer" containerID="854b736cb4633a488366a6d49a544882ebfff5acc3331f4db92ea942b93aa6a3"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.931034 4832 scope.go:117] "RemoveContainer" containerID="a55e4f7c19e844e121426c1c34929b755300912a1d79e4756f08f8561b868af9"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.955169 4832 scope.go:117] "RemoveContainer" containerID="2502e6695845ba61cf2acae39c411c1c8ad40db05cd20265e119f407204e7d3f"
Oct 02 18:51:15 crc kubenswrapper[4832]: I1002 18:51:15.991703 4832 scope.go:117] "RemoveContainer" containerID="5cb5e8da29f28563cb7b16d35f18f8c8dd125ddaeb9e3da6460f490f095b407d"
Oct 02 18:51:16 crc kubenswrapper[4832]: I1002 18:51:16.019944 4832 scope.go:117] "RemoveContainer" containerID="0dd5ed343af17c2d8a164abac8674da859b1193f75fa25764bfce9787045f494"
Oct 02 18:51:20 crc kubenswrapper[4832]: I1002 18:51:20.225907 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:51:20 crc kubenswrapper[4832]: E1002 18:51:20.226951 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:51:24 crc kubenswrapper[4832]: I1002 18:51:24.042677 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c"]
Oct 02 18:51:24 crc kubenswrapper[4832]: I1002 18:51:24.053028 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-kdc6c"]
Oct 02 18:51:25 crc kubenswrapper[4832]: I1002 18:51:25.236387 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9527bcd2-d70f-485b-a5f7-68ba69f883ec" path="/var/lib/kubelet/pods/9527bcd2-d70f-485b-a5f7-68ba69f883ec/volumes"
Oct 02 18:51:28 crc kubenswrapper[4832]: I1002 18:51:28.037411 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-h9rpt"]
Oct 02 18:51:28 crc kubenswrapper[4832]: I1002 18:51:28.051891 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-h9rpt"]
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.030372 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-g48l8"]
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.042056 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-6hsqw"]
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.054450 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-c7xhd"]
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.063650 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-g48l8"]
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.073228 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-6hsqw"]
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.083231 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-c7xhd"]
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.244523 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78425b20-8e1d-4853-89a4-09a2c47be243" path="/var/lib/kubelet/pods/78425b20-8e1d-4853-89a4-09a2c47be243/volumes"
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.246519 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878da378-32e5-4349-9902-1f0a9f75c7c1" path="/var/lib/kubelet/pods/878da378-32e5-4349-9902-1f0a9f75c7c1/volumes"
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.248728 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96d82e23-d6e0-4faf-922d-505c1e637644" path="/var/lib/kubelet/pods/96d82e23-d6e0-4faf-922d-505c1e637644/volumes"
Oct 02 18:51:29 crc kubenswrapper[4832]: I1002 18:51:29.250821 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1841a9c-82f5-4ece-8913-264ec2f5bdb2" path="/var/lib/kubelet/pods/b1841a9c-82f5-4ece-8913-264ec2f5bdb2/volumes"
Oct 02 18:51:33 crc kubenswrapper[4832]: I1002 18:51:33.224545 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c"
Oct 02 18:51:33 crc kubenswrapper[4832]: E1002 18:51:33.225594 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 18:51:35 crc kubenswrapper[4832]: I1002 18:51:35.178712 4832 generic.go:334] "Generic (PLEG): container finished" podID="92baed54-227c-474f-ad5c-b8c14493d2d5" containerID="118d3f52ebf7d62d58210b02d288c4f180f9f0aae0b7f63b788d0d05212727b0" exitCode=0
Oct 02 18:51:35 crc kubenswrapper[4832]: I1002 18:51:35.178789 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd" event={"ID":"92baed54-227c-474f-ad5c-b8c14493d2d5","Type":"ContainerDied","Data":"118d3f52ebf7d62d58210b02d288c4f180f9f0aae0b7f63b788d0d05212727b0"}
Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.728162 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd"
Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.890426 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-ssh-key\") pod \"92baed54-227c-474f-ad5c-b8c14493d2d5\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") "
Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.890570 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-bootstrap-combined-ca-bundle\") pod \"92baed54-227c-474f-ad5c-b8c14493d2d5\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") "
Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.890697 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhx24\" (UniqueName: \"kubernetes.io/projected/92baed54-227c-474f-ad5c-b8c14493d2d5-kube-api-access-vhx24\") pod \"92baed54-227c-474f-ad5c-b8c14493d2d5\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") "
Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.890987 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-inventory\") pod \"92baed54-227c-474f-ad5c-b8c14493d2d5\" (UID: \"92baed54-227c-474f-ad5c-b8c14493d2d5\") "
Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.895633 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92baed54-227c-474f-ad5c-b8c14493d2d5-kube-api-access-vhx24" (OuterVolumeSpecName: "kube-api-access-vhx24") pod "92baed54-227c-474f-ad5c-b8c14493d2d5" (UID: "92baed54-227c-474f-ad5c-b8c14493d2d5"). InnerVolumeSpecName "kube-api-access-vhx24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.901002 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "92baed54-227c-474f-ad5c-b8c14493d2d5" (UID: "92baed54-227c-474f-ad5c-b8c14493d2d5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.924699 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "92baed54-227c-474f-ad5c-b8c14493d2d5" (UID: "92baed54-227c-474f-ad5c-b8c14493d2d5"). InnerVolumeSpecName "ssh-key".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.935184 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-inventory" (OuterVolumeSpecName: "inventory") pod "92baed54-227c-474f-ad5c-b8c14493d2d5" (UID: "92baed54-227c-474f-ad5c-b8c14493d2d5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.994557 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhx24\" (UniqueName: \"kubernetes.io/projected/92baed54-227c-474f-ad5c-b8c14493d2d5-kube-api-access-vhx24\") on node \"crc\" DevicePath \"\"" Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.994637 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.994649 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:51:36 crc kubenswrapper[4832]: I1002 18:51:36.994660 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92baed54-227c-474f-ad5c-b8c14493d2d5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.035519 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-88a1-account-create-hdt2w"] Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.046635 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-88a1-account-create-hdt2w"] Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.206577 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd" event={"ID":"92baed54-227c-474f-ad5c-b8c14493d2d5","Type":"ContainerDied","Data":"868093c079e939e91ae65d6e8fd844c11f89fc3fc58045b0a86326eb92bd7082"} Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.206622 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="868093c079e939e91ae65d6e8fd844c11f89fc3fc58045b0a86326eb92bd7082" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.206725 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.238754 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd676a64-dc24-42de-9103-9e9d58390f23" path="/var/lib/kubelet/pods/dd676a64-dc24-42de-9103-9e9d58390f23/volumes" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.333296 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm"] Oct 02 18:51:37 crc kubenswrapper[4832]: E1002 18:51:37.333780 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92baed54-227c-474f-ad5c-b8c14493d2d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.333796 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92baed54-227c-474f-ad5c-b8c14493d2d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.334017 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="92baed54-227c-474f-ad5c-b8c14493d2d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.335355 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.341798 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.341988 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.342338 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.360220 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm"] Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.364066 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.424075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.424200 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24d67\" (UniqueName: \"kubernetes.io/projected/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-kube-api-access-24d67\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.424381 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.526840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.526909 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24d67\" (UniqueName: \"kubernetes.io/projected/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-kube-api-access-24d67\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.526947 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.532305 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.532409 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.544240 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24d67\" (UniqueName: \"kubernetes.io/projected/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-kube-api-access-24d67\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:37 crc kubenswrapper[4832]: I1002 18:51:37.696155 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:51:38 crc kubenswrapper[4832]: I1002 18:51:38.036165 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-29f5-account-create-jjqdt"] Oct 02 18:51:38 crc kubenswrapper[4832]: I1002 18:51:38.051379 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0640-account-create-l2rlm"] Oct 02 18:51:38 crc kubenswrapper[4832]: I1002 18:51:38.060679 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-29f5-account-create-jjqdt"] Oct 02 18:51:38 crc kubenswrapper[4832]: I1002 18:51:38.069980 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0640-account-create-l2rlm"] Oct 02 18:51:38 crc kubenswrapper[4832]: I1002 18:51:38.418830 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm"] Oct 02 18:51:39 crc kubenswrapper[4832]: I1002 18:51:39.038428 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vn8w7"] Oct 02 18:51:39 crc kubenswrapper[4832]: I1002 18:51:39.049953 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vn8w7"] Oct 02 18:51:39 crc kubenswrapper[4832]: I1002 18:51:39.245787 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="373e2d35-0357-4de4-9315-58efaca557f9" path="/var/lib/kubelet/pods/373e2d35-0357-4de4-9315-58efaca557f9/volumes" Oct 02 18:51:39 crc kubenswrapper[4832]: I1002 18:51:39.251761 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439d6007-1cf6-40ac-9cff-272b66972580" path="/var/lib/kubelet/pods/439d6007-1cf6-40ac-9cff-272b66972580/volumes" Oct 02 18:51:39 crc kubenswrapper[4832]: I1002 18:51:39.252474 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55480b52-9d4b-4b5a-a8b0-4235287fc493" path="/var/lib/kubelet/pods/55480b52-9d4b-4b5a-a8b0-4235287fc493/volumes" Oct 02 18:51:39 crc kubenswrapper[4832]: I1002 18:51:39.253175 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" event={"ID":"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5","Type":"ContainerStarted","Data":"cf35a71613d11fe1d9d07d716252643a14e72d39afc4a9473b5173270cfafc58"} Oct 02 18:51:40 crc kubenswrapper[4832]: I1002 18:51:40.249431 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" event={"ID":"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5","Type":"ContainerStarted","Data":"15717b62fb68cdd5e1cfc365a536500f761f2651bd84593b07cf210c9e54812c"} Oct 02 18:51:40 crc kubenswrapper[4832]: I1002 18:51:40.276663 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" podStartSLOduration=2.343033444 podStartE2EDuration="3.276639042s" podCreationTimestamp="2025-10-02 18:51:37 +0000 UTC" firstStartedPulling="2025-10-02 18:51:38.415476054 +0000 UTC m=+1855.384918966" lastFinishedPulling="2025-10-02 18:51:39.349081692 +0000 UTC m=+1856.318524564" observedRunningTime="2025-10-02 18:51:40.266510458 +0000 UTC m=+1857.235953350" watchObservedRunningTime="2025-10-02 18:51:40.276639042 +0000 UTC m=+1857.246081944" Oct 02 18:51:47 crc kubenswrapper[4832]: I1002 18:51:47.223416 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 
18:51:47 crc kubenswrapper[4832]: E1002 18:51:47.224580 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:51:50 crc kubenswrapper[4832]: I1002 18:51:50.059564 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-bd68-account-create-n5nsg"] Oct 02 18:51:50 crc kubenswrapper[4832]: I1002 18:51:50.071934 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-bae9-account-create-spzmd"] Oct 02 18:51:50 crc kubenswrapper[4832]: I1002 18:51:50.082813 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-bd68-account-create-n5nsg"] Oct 02 18:51:50 crc kubenswrapper[4832]: I1002 18:51:50.096652 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-bae9-account-create-spzmd"] Oct 02 18:51:51 crc kubenswrapper[4832]: I1002 18:51:51.237227 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b99959e1-bdb1-4c91-9256-56fff1ac186b" path="/var/lib/kubelet/pods/b99959e1-bdb1-4c91-9256-56fff1ac186b/volumes" Oct 02 18:51:51 crc kubenswrapper[4832]: I1002 18:51:51.238158 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b44163-e584-4c80-a5b6-2f8d6e3af4e5" path="/var/lib/kubelet/pods/e4b44163-e584-4c80-a5b6-2f8d6e3af4e5/volumes" Oct 02 18:51:59 crc kubenswrapper[4832]: I1002 18:51:59.223627 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:51:59 crc kubenswrapper[4832]: E1002 18:51:59.224414 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:52:00 crc kubenswrapper[4832]: I1002 18:52:00.045861 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-665w5"] Oct 02 18:52:00 crc kubenswrapper[4832]: I1002 18:52:00.060978 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-665w5"] Oct 02 18:52:01 crc kubenswrapper[4832]: I1002 18:52:01.244586 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83e9ef5-26f5-4ec5-b70c-c28549d863f6" path="/var/lib/kubelet/pods/c83e9ef5-26f5-4ec5-b70c-c28549d863f6/volumes" Oct 02 18:52:09 crc kubenswrapper[4832]: I1002 18:52:09.040935 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hjtkl"] Oct 02 18:52:09 crc kubenswrapper[4832]: I1002 18:52:09.052447 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hjtkl"] Oct 02 18:52:09 crc kubenswrapper[4832]: I1002 18:52:09.237211 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5" path="/var/lib/kubelet/pods/4cfb4fb9-f68d-4b89-99a9-aaa17f6ed1a5/volumes" Oct 02 18:52:10 crc kubenswrapper[4832]: I1002 18:52:10.050464 4832 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-lrhfz"] Oct 02 18:52:10 crc kubenswrapper[4832]: I1002 18:52:10.080007 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-lrhfz"] Oct 02 18:52:10 crc kubenswrapper[4832]: I1002 18:52:10.223153 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:52:10 crc kubenswrapper[4832]: E1002 18:52:10.223509 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:52:11 crc kubenswrapper[4832]: I1002 18:52:11.244988 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fb0cc0-e570-437c-b527-c925ff84070a" path="/var/lib/kubelet/pods/78fb0cc0-e570-437c-b527-c925ff84070a/volumes" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.282647 4832 scope.go:117] "RemoveContainer" containerID="483d738e594cafb67754d92264e109dc765b69da20bca45b67008040827736ed" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.328777 4832 scope.go:117] "RemoveContainer" containerID="018efa9ba3e8f36edd8014d8aab640f3a4793ef64cac8cf6d445dece36750086" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.425577 4832 scope.go:117] "RemoveContainer" containerID="638211d2af7d4d1a57ea2295ee30ddc19e60a9032fc0cb56d7094450b74be015" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.471334 4832 scope.go:117] "RemoveContainer" containerID="830350ec229b990c47f085c4b948f743b203a07b08f0deb0ebcf78bfaab3d580" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.534888 4832 scope.go:117] "RemoveContainer" containerID="d5f7d739fa5a7208f82254b309da66fca2ac4bfec6cd0afd46f18f9f20eacaa5" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.595009 4832 scope.go:117] "RemoveContainer" containerID="86b306bfc6fa4454735b4e76575e5d07032b55ac0b3d86ddf619f1ce7430c7e3" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.642137 4832 scope.go:117] "RemoveContainer" containerID="c14bce0a9e02e0f85adc31c561faab72456f39786e56500c65f8b05062b1918b" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.684410 4832 scope.go:117] "RemoveContainer" containerID="358dbc395ecb88f1f3493a001f23f1722c287e588d7907136af4d48bd40aa9c0" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.738437 4832 scope.go:117] "RemoveContainer" containerID="e06bd3a855fd570d1520e686e9546b597caeb0e3007a06149252be4e7b1369b2" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.785700 4832 scope.go:117] "RemoveContainer" containerID="46b679f538c188670cdda0092f087b52c3b05b73ce5adaa8cc17f790bd4c8a26" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.827000 4832 scope.go:117] "RemoveContainer" containerID="ca2804bd7a6edd4e18308e8f4f9cafc16f23b7632a7042376b21c76dea4dbedd" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.848109 4832 scope.go:117] "RemoveContainer" containerID="a02f592a27434bb3cf3c09f7e69f49a25273a4b763caa36b8067c0f39c9b8cf5" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.875415 4832 scope.go:117] "RemoveContainer" containerID="68bfd7208908963d522c5060d2c22b778f9177768f7999cebeceadeb768b85b8" Oct 02 18:52:16 crc kubenswrapper[4832]: I1002 18:52:16.917661 4832 
scope.go:117] "RemoveContainer" containerID="28bea44be94a0b461d34dbf1a376ed163a049ad142afcaee894335f27913c1df" Oct 02 18:52:17 crc kubenswrapper[4832]: I1002 18:52:17.069934 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pwlwm"] Oct 02 18:52:17 crc kubenswrapper[4832]: I1002 18:52:17.082862 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pwlwm"] Oct 02 18:52:17 crc kubenswrapper[4832]: I1002 18:52:17.245611 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4233545e-957d-4d27-b6b0-ac9825530a13" path="/var/lib/kubelet/pods/4233545e-957d-4d27-b6b0-ac9825530a13/volumes" Oct 02 18:52:24 crc kubenswrapper[4832]: I1002 18:52:24.223134 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:52:24 crc kubenswrapper[4832]: E1002 18:52:24.224007 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.458025 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b55rd"] Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.463098 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.497904 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b55rd"] Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.590772 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-utilities\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.591005 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rmr6\" (UniqueName: \"kubernetes.io/projected/dd3658dd-cc43-4b83-9f9f-7724df4e9956-kube-api-access-7rmr6\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.591088 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-catalog-content\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.693826 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-utilities\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 
18:52:26.694385 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-utilities\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.695573 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rmr6\" (UniqueName: \"kubernetes.io/projected/dd3658dd-cc43-4b83-9f9f-7724df4e9956-kube-api-access-7rmr6\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.696062 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-catalog-content\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.696734 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-catalog-content\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.732840 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rmr6\" (UniqueName: \"kubernetes.io/projected/dd3658dd-cc43-4b83-9f9f-7724df4e9956-kube-api-access-7rmr6\") pod \"redhat-marketplace-b55rd\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:26 crc kubenswrapper[4832]: I1002 18:52:26.811691 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:27 crc kubenswrapper[4832]: I1002 18:52:27.326768 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b55rd"] Oct 02 18:52:27 crc kubenswrapper[4832]: I1002 18:52:27.886765 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerID="96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f" exitCode=0 Oct 02 18:52:27 crc kubenswrapper[4832]: I1002 18:52:27.886859 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b55rd" event={"ID":"dd3658dd-cc43-4b83-9f9f-7724df4e9956","Type":"ContainerDied","Data":"96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f"} Oct 02 18:52:27 crc kubenswrapper[4832]: I1002 18:52:27.887220 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b55rd" event={"ID":"dd3658dd-cc43-4b83-9f9f-7724df4e9956","Type":"ContainerStarted","Data":"81fea8f6c74d65effb23c6746925723e95a2bcb38a51ef0985a851ff1910cc87"} Oct 02 18:52:27 crc kubenswrapper[4832]: I1002 18:52:27.889876 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 18:52:28 crc kubenswrapper[4832]: I1002 18:52:28.051761 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-h97bp"] Oct 02 18:52:28 crc kubenswrapper[4832]: I1002 18:52:28.061560 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-h97bp"] Oct 02 18:52:29 crc kubenswrapper[4832]: I1002 18:52:29.238962 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77fb37d1-dfa6-4ade-9bad-6263a7f22277" path="/var/lib/kubelet/pods/77fb37d1-dfa6-4ade-9bad-6263a7f22277/volumes" Oct 02 18:52:29 crc kubenswrapper[4832]: I1002 18:52:29.909996 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerID="e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed" exitCode=0 Oct 02 18:52:29 crc kubenswrapper[4832]: I1002 18:52:29.910041 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b55rd" event={"ID":"dd3658dd-cc43-4b83-9f9f-7724df4e9956","Type":"ContainerDied","Data":"e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed"} Oct 02 18:52:30 crc kubenswrapper[4832]: I1002 18:52:30.931159 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b55rd" event={"ID":"dd3658dd-cc43-4b83-9f9f-7724df4e9956","Type":"ContainerStarted","Data":"179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639"} Oct 02 18:52:30 crc kubenswrapper[4832]: I1002 18:52:30.964390 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b55rd" podStartSLOduration=2.525193759 podStartE2EDuration="4.964366148s" podCreationTimestamp="2025-10-02 18:52:26 +0000 UTC" firstStartedPulling="2025-10-02 18:52:27.889485574 +0000 UTC m=+1904.858928446" lastFinishedPulling="2025-10-02 18:52:30.328657963 +0000 UTC m=+1907.298100835" observedRunningTime="2025-10-02 18:52:30.959662032 +0000 UTC m=+1907.929104904" watchObservedRunningTime="2025-10-02 18:52:30.964366148 +0000 UTC m=+1907.933809020" Oct 02 18:52:35 crc kubenswrapper[4832]: I1002 18:52:35.238106 4832 scope.go:117] "RemoveContainer" 
containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:52:35 crc kubenswrapper[4832]: E1002 18:52:35.239328 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:52:36 crc kubenswrapper[4832]: I1002 18:52:36.811889 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:36 crc kubenswrapper[4832]: I1002 18:52:36.812214 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:36 crc kubenswrapper[4832]: I1002 18:52:36.878529 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:37 crc kubenswrapper[4832]: I1002 18:52:37.056420 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:37 crc kubenswrapper[4832]: I1002 18:52:37.119711 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b55rd"] Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.035001 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b55rd" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerName="registry-server" containerID="cri-o://179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639" gracePeriod=2 Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.726675 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.806860 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rmr6\" (UniqueName: \"kubernetes.io/projected/dd3658dd-cc43-4b83-9f9f-7724df4e9956-kube-api-access-7rmr6\") pod \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.807053 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-utilities\") pod \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.807255 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-catalog-content\") pod \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\" (UID: \"dd3658dd-cc43-4b83-9f9f-7724df4e9956\") " Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.807687 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-utilities" (OuterVolumeSpecName: "utilities") pod "dd3658dd-cc43-4b83-9f9f-7724df4e9956" (UID: "dd3658dd-cc43-4b83-9f9f-7724df4e9956"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.807852 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.811900 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3658dd-cc43-4b83-9f9f-7724df4e9956-kube-api-access-7rmr6" (OuterVolumeSpecName: "kube-api-access-7rmr6") pod "dd3658dd-cc43-4b83-9f9f-7724df4e9956" (UID: "dd3658dd-cc43-4b83-9f9f-7724df4e9956"). InnerVolumeSpecName "kube-api-access-7rmr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.822420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd3658dd-cc43-4b83-9f9f-7724df4e9956" (UID: "dd3658dd-cc43-4b83-9f9f-7724df4e9956"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.910193 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd3658dd-cc43-4b83-9f9f-7724df4e9956-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:52:39 crc kubenswrapper[4832]: I1002 18:52:39.910220 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rmr6\" (UniqueName: \"kubernetes.io/projected/dd3658dd-cc43-4b83-9f9f-7724df4e9956-kube-api-access-7rmr6\") on node \"crc\" DevicePath \"\"" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.048665 4832 generic.go:334] "Generic (PLEG): container finished" podID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerID="179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639" exitCode=0 Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.048856 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b55rd" event={"ID":"dd3658dd-cc43-4b83-9f9f-7724df4e9956","Type":"ContainerDied","Data":"179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639"} Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.048940 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b55rd" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.049718 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b55rd" event={"ID":"dd3658dd-cc43-4b83-9f9f-7724df4e9956","Type":"ContainerDied","Data":"81fea8f6c74d65effb23c6746925723e95a2bcb38a51ef0985a851ff1910cc87"} Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.049778 4832 scope.go:117] "RemoveContainer" containerID="179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.080248 4832 scope.go:117] "RemoveContainer" containerID="e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.115315 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b55rd"] Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.134087 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b55rd"] Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.150288 4832 scope.go:117] "RemoveContainer" containerID="96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.196028 4832 scope.go:117] "RemoveContainer" containerID="179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639" Oct 02 18:52:40 crc kubenswrapper[4832]: E1002 18:52:40.196830 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639\": container with ID starting with 179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639 not found: ID does not exist" containerID="179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.196938 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639"} err="failed to get container status \"179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639\": rpc error: code = NotFound desc = could not find container \"179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639\": container with ID starting with 179fcfff8cbcc3395e1f9f16abc18bf1e857dcc23032c3649a6c5f7777ef3639 not found: ID does not exist" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.196975 4832 scope.go:117] "RemoveContainer" containerID="e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed" Oct 02 18:52:40 crc kubenswrapper[4832]: E1002 18:52:40.197668 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed\": container with ID starting with e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed not found: ID does not exist" containerID="e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.197715 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed"} err="failed to get container status \"e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed\": rpc error: code = NotFound desc = could not find 
container \"e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed\": container with ID starting with e4972710416f268fb8b00785b6eecf23d8d5e00ecca861b2cbfa0a83336d10ed not found: ID does not exist" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.197751 4832 scope.go:117] "RemoveContainer" containerID="96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f" Oct 02 18:52:40 crc kubenswrapper[4832]: E1002 18:52:40.198519 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f\": container with ID starting with 96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f not found: ID does not exist" containerID="96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f" Oct 02 18:52:40 crc kubenswrapper[4832]: I1002 18:52:40.198566 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f"} err="failed to get container status \"96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f\": rpc error: code = NotFound desc = could not find container \"96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f\": container with ID starting with 96913d37e4fcc44fd00a00b6a2a2ab0c406671688166dec6a64467553e45ca1f not found: ID does not exist" Oct 02 18:52:41 crc kubenswrapper[4832]: I1002 18:52:41.238764 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" path="/var/lib/kubelet/pods/dd3658dd-cc43-4b83-9f9f-7724df4e9956/volumes" Oct 02 18:52:50 crc kubenswrapper[4832]: I1002 18:52:50.223370 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:52:50 crc kubenswrapper[4832]: E1002 18:52:50.225420 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:52:58 crc kubenswrapper[4832]: I1002 18:52:58.045327 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5gtbq"] Oct 02 18:52:58 crc kubenswrapper[4832]: I1002 18:52:58.054790 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5gtbq"] Oct 02 18:52:59 crc kubenswrapper[4832]: I1002 18:52:59.244000 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03462b3-a4a5-441c-93c5-1f0008d95f21" path="/var/lib/kubelet/pods/f03462b3-a4a5-441c-93c5-1f0008d95f21/volumes" Oct 02 18:53:04 crc kubenswrapper[4832]: I1002 18:53:04.223437 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:53:04 crc kubenswrapper[4832]: E1002 18:53:04.224151 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" 
podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:53:17 crc kubenswrapper[4832]: I1002 18:53:17.343859 4832 scope.go:117] "RemoveContainer" containerID="44f55dcef8c69307c8ecf514e7467393f40b59cc988f5fb1e6238e6581f8f1f6" Oct 02 18:53:17 crc kubenswrapper[4832]: I1002 18:53:17.416224 4832 scope.go:117] "RemoveContainer" containerID="7d2e251dafc0c96bb3b4fa434da9173febda388b9577597e33fa89faa96abb1d" Oct 02 18:53:17 crc kubenswrapper[4832]: I1002 18:53:17.498407 4832 scope.go:117] "RemoveContainer" containerID="407d7211cfa1d4972189e68fb48ce36b33b39b30725ce227d0a718c9f57bd8c3" Oct 02 18:53:19 crc kubenswrapper[4832]: I1002 18:53:19.223861 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:53:19 crc kubenswrapper[4832]: E1002 18:53:19.224323 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:53:26 crc kubenswrapper[4832]: I1002 18:53:26.085571 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q9dvz"] Oct 02 18:53:26 crc kubenswrapper[4832]: I1002 18:53:26.104965 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-r92d6"] Oct 02 18:53:26 crc kubenswrapper[4832]: I1002 18:53:26.116084 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q9dvz"] Oct 02 18:53:26 crc kubenswrapper[4832]: I1002 18:53:26.131130 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-r92d6"] Oct 02 18:53:26 crc kubenswrapper[4832]: I1002 18:53:26.148863 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-p4lnx"] Oct 02 18:53:26 crc kubenswrapper[4832]: I1002 18:53:26.158675 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-p4lnx"] Oct 02 18:53:27 crc kubenswrapper[4832]: I1002 18:53:27.237450 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fb1e5a-5c6b-41c5-a77a-10ed80318ea4" path="/var/lib/kubelet/pods/37fb1e5a-5c6b-41c5-a77a-10ed80318ea4/volumes" Oct 02 18:53:27 crc kubenswrapper[4832]: I1002 18:53:27.238256 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43801b93-9634-4b11-995a-60ce9116aac4" path="/var/lib/kubelet/pods/43801b93-9634-4b11-995a-60ce9116aac4/volumes" Oct 02 18:53:27 crc kubenswrapper[4832]: I1002 18:53:27.238926 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc" path="/var/lib/kubelet/pods/4eeaae3e-253d-4ebc-a3e4-5ebab1635ccc/volumes" Oct 02 18:53:33 crc kubenswrapper[4832]: I1002 18:53:33.224412 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:53:33 crc kubenswrapper[4832]: E1002 18:53:33.225677 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:53:37 crc kubenswrapper[4832]: I1002 18:53:37.776633 4832 generic.go:334] "Generic (PLEG): container finished" podID="6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5" containerID="15717b62fb68cdd5e1cfc365a536500f761f2651bd84593b07cf210c9e54812c" exitCode=0 Oct 02 18:53:37 crc kubenswrapper[4832]: I1002 18:53:37.776728 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" event={"ID":"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5","Type":"ContainerDied","Data":"15717b62fb68cdd5e1cfc365a536500f761f2651bd84593b07cf210c9e54812c"} Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.363555 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.516805 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-inventory\") pod \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.517351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24d67\" (UniqueName: \"kubernetes.io/projected/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-kube-api-access-24d67\") pod \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.518225 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-ssh-key\") pod \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\" (UID: \"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5\") " Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.522456 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-kube-api-access-24d67" (OuterVolumeSpecName: "kube-api-access-24d67") pod "6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5" (UID: "6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5"). InnerVolumeSpecName "kube-api-access-24d67". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.561660 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5" (UID: "6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.567812 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-inventory" (OuterVolumeSpecName: "inventory") pod "6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5" (UID: "6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.622718 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.622757 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24d67\" (UniqueName: \"kubernetes.io/projected/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-kube-api-access-24d67\") on node \"crc\" DevicePath \"\"" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.622772 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.809940 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" event={"ID":"6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5","Type":"ContainerDied","Data":"cf35a71613d11fe1d9d07d716252643a14e72d39afc4a9473b5173270cfafc58"} Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.810011 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf35a71613d11fe1d9d07d716252643a14e72d39afc4a9473b5173270cfafc58" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.810172 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.912976 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz"] Oct 02 18:53:39 crc kubenswrapper[4832]: E1002 18:53:39.913528 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerName="registry-server" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.913549 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerName="registry-server" Oct 02 18:53:39 crc kubenswrapper[4832]: E1002 18:53:39.913568 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerName="extract-utilities" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.913575 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerName="extract-utilities" Oct 02 18:53:39 crc kubenswrapper[4832]: E1002 18:53:39.913592 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.913600 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 02 18:53:39 crc kubenswrapper[4832]: E1002 18:53:39.913627 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerName="extract-content" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.913636 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerName="extract-content" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.913881 4832 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.913910 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3658dd-cc43-4b83-9f9f-7724df4e9956" containerName="registry-server" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.914751 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.919494 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.919550 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.919852 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.921964 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.935558 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.935635 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.935713 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6bv\" (UniqueName: \"kubernetes.io/projected/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-kube-api-access-6x6bv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:39 crc kubenswrapper[4832]: I1002 18:53:39.937511 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz"] Oct 02 18:53:40 crc kubenswrapper[4832]: I1002 18:53:40.036977 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:40 crc kubenswrapper[4832]: I1002 18:53:40.037036 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x6bv\" (UniqueName: 
\"kubernetes.io/projected/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-kube-api-access-6x6bv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:40 crc kubenswrapper[4832]: I1002 18:53:40.037306 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:40 crc kubenswrapper[4832]: I1002 18:53:40.040901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:40 crc kubenswrapper[4832]: I1002 18:53:40.048460 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:40 crc kubenswrapper[4832]: I1002 18:53:40.060121 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x6bv\" (UniqueName: \"kubernetes.io/projected/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-kube-api-access-6x6bv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-px9xz\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:40 crc kubenswrapper[4832]: I1002 18:53:40.235092 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:53:40 crc kubenswrapper[4832]: I1002 18:53:40.871812 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz"] Oct 02 18:53:40 crc kubenswrapper[4832]: W1002 18:53:40.874486 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5610cb4e_4f23_4a76_b59c_5e3db6b532ff.slice/crio-2b85f7801aaed461f8fe9773943fba61102823ae22da04dc4edc455460d0ccb2 WatchSource:0}: Error finding container 2b85f7801aaed461f8fe9773943fba61102823ae22da04dc4edc455460d0ccb2: Status 404 returned error can't find the container with id 2b85f7801aaed461f8fe9773943fba61102823ae22da04dc4edc455460d0ccb2 Oct 02 18:53:41 crc kubenswrapper[4832]: I1002 18:53:41.835687 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" event={"ID":"5610cb4e-4f23-4a76-b59c-5e3db6b532ff","Type":"ContainerStarted","Data":"2b85f7801aaed461f8fe9773943fba61102823ae22da04dc4edc455460d0ccb2"} Oct 02 18:53:42 crc kubenswrapper[4832]: I1002 18:53:42.848840 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" event={"ID":"5610cb4e-4f23-4a76-b59c-5e3db6b532ff","Type":"ContainerStarted","Data":"e5a44571a475af07c711a6986cb0118c5ee808db431de75f2e47da86425f3d84"} Oct 02 18:53:42 crc kubenswrapper[4832]: I1002 18:53:42.881566 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" podStartSLOduration=3.200247436 podStartE2EDuration="3.881541429s" podCreationTimestamp="2025-10-02 18:53:39 +0000 UTC" firstStartedPulling="2025-10-02 18:53:40.877205321 +0000 UTC m=+1977.846648233" lastFinishedPulling="2025-10-02 18:53:41.558499344 +0000 UTC m=+1978.527942226" observedRunningTime="2025-10-02 18:53:42.880724924 +0000 UTC m=+1979.850167796" watchObservedRunningTime="2025-10-02 18:53:42.881541429 +0000 UTC m=+1979.850984301" Oct 02 18:53:44 crc kubenswrapper[4832]: I1002 18:53:44.223653 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:53:44 crc kubenswrapper[4832]: E1002 18:53:44.224524 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.055592 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5495-account-create-cfhsk"] Oct 02 18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.076301 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2c18-account-create-wmc9q"] Oct 02 18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.098882 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5495-account-create-cfhsk"] Oct 02 18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.114426 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3b08-account-create-klnrc"] Oct 02 
18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.127553 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3b08-account-create-klnrc"] Oct 02 18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.138200 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2c18-account-create-wmc9q"] Oct 02 18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.241216 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1" path="/var/lib/kubelet/pods/36c5c7b1-24e3-4c46-8b92-308ce6e4fbc1/volumes" Oct 02 18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.242533 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b79da8-3130-4a24-b34e-5179d295a543" path="/var/lib/kubelet/pods/55b79da8-3130-4a24-b34e-5179d295a543/volumes" Oct 02 18:53:45 crc kubenswrapper[4832]: I1002 18:53:45.243142 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e19f6233-ad7b-418c-a4a0-b2ffaafbbb52" path="/var/lib/kubelet/pods/e19f6233-ad7b-418c-a4a0-b2ffaafbbb52/volumes" Oct 02 18:53:57 crc kubenswrapper[4832]: I1002 18:53:57.223562 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:53:58 crc kubenswrapper[4832]: I1002 18:53:58.061521 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"bad383d75da31b854d1e8d51851deee9c385d51d3a1bd396750d2fce236862ee"} Oct 02 18:54:05 crc kubenswrapper[4832]: I1002 18:54:05.045940 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-q2dcq"] Oct 02 18:54:05 crc kubenswrapper[4832]: I1002 18:54:05.061639 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-q2dcq"] Oct 02 18:54:05 crc kubenswrapper[4832]: I1002 18:54:05.239482 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4776bec7-08b6-4900-8d4d-40074945b0dd" path="/var/lib/kubelet/pods/4776bec7-08b6-4900-8d4d-40074945b0dd/volumes" Oct 02 18:54:17 crc kubenswrapper[4832]: I1002 18:54:17.664768 4832 scope.go:117] "RemoveContainer" containerID="c174d68e70894850142250ba6fd6ecfd51a8f72b8fcee769e8f043eab010dabf" Oct 02 18:54:17 crc kubenswrapper[4832]: I1002 18:54:17.724725 4832 scope.go:117] "RemoveContainer" containerID="b5a94a8f2d0ecff078823b934c29966423dad27aa002f78a91c6e12bf93bca94" Oct 02 18:54:17 crc kubenswrapper[4832]: I1002 18:54:17.761931 4832 scope.go:117] "RemoveContainer" containerID="2ed7834be22756aae562894ef11a27ea49f844e6f8ce26a212776e76fe3c60d0" Oct 02 18:54:17 crc kubenswrapper[4832]: I1002 18:54:17.839069 4832 scope.go:117] "RemoveContainer" containerID="f8dcc0e4f8736fa2c15c4f860f5f0bac26a9032f1666e6ba33de830709a7802c" Oct 02 18:54:17 crc kubenswrapper[4832]: I1002 18:54:17.897245 4832 scope.go:117] "RemoveContainer" containerID="fb886f1bc09a12f7c592ad96f6440e96e7c85227bc25af979d9de0be19dc2609" Oct 02 18:54:17 crc kubenswrapper[4832]: I1002 18:54:17.946872 4832 scope.go:117] "RemoveContainer" containerID="973f91391ff7e877bd23ff27ae14a5b6cc0ddc25bd7dfc566a90c50aa29ebe0f" Oct 02 18:54:18 crc kubenswrapper[4832]: I1002 18:54:18.000174 4832 scope.go:117] "RemoveContainer" containerID="5dc172bbd4ee3a181801ce87ea53a420f0fbec94a5ee52921498bd103a0e1837" Oct 02 18:54:20 crc kubenswrapper[4832]: I1002 18:54:20.055672 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/aodh-eb3e-account-create-bjgln"] Oct 02 18:54:20 crc kubenswrapper[4832]: I1002 18:54:20.063808 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-eb3e-account-create-bjgln"] Oct 02 18:54:21 crc kubenswrapper[4832]: I1002 18:54:21.244529 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afde15af-0d85-4b31-8bb1-da2cdbf8bbe9" path="/var/lib/kubelet/pods/afde15af-0d85-4b31-8bb1-da2cdbf8bbe9/volumes" Oct 02 18:54:22 crc kubenswrapper[4832]: I1002 18:54:22.040610 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lmj2"] Oct 02 18:54:22 crc kubenswrapper[4832]: I1002 18:54:22.054192 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5lmj2"] Oct 02 18:54:23 crc kubenswrapper[4832]: I1002 18:54:23.238397 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36d9377-cbc4-4760-ae43-3065dfe614fe" path="/var/lib/kubelet/pods/e36d9377-cbc4-4760-ae43-3065dfe614fe/volumes" Oct 02 18:54:47 crc kubenswrapper[4832]: I1002 18:54:47.052841 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-kr474"] Oct 02 18:54:47 crc kubenswrapper[4832]: I1002 18:54:47.065672 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-kr474"] Oct 02 18:54:47 crc kubenswrapper[4832]: I1002 18:54:47.242982 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed3933a2-ea03-4354-bfa4-1ec240e12c9d" path="/var/lib/kubelet/pods/ed3933a2-ea03-4354-bfa4-1ec240e12c9d/volumes" Oct 02 18:54:53 crc kubenswrapper[4832]: I1002 18:54:53.049631 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zznjp"] Oct 02 18:54:53 crc kubenswrapper[4832]: I1002 18:54:53.060797 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zznjp"] Oct 02 18:54:53 crc kubenswrapper[4832]: I1002 18:54:53.256611 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88751a34-122e-469a-955d-d91072955b66" path="/var/lib/kubelet/pods/88751a34-122e-469a-955d-d91072955b66/volumes" Oct 02 18:54:55 crc kubenswrapper[4832]: I1002 18:54:55.801327 4832 generic.go:334] "Generic (PLEG): container finished" podID="5610cb4e-4f23-4a76-b59c-5e3db6b532ff" containerID="e5a44571a475af07c711a6986cb0118c5ee808db431de75f2e47da86425f3d84" exitCode=0 Oct 02 18:54:55 crc kubenswrapper[4832]: I1002 18:54:55.801507 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" event={"ID":"5610cb4e-4f23-4a76-b59c-5e3db6b532ff","Type":"ContainerDied","Data":"e5a44571a475af07c711a6986cb0118c5ee808db431de75f2e47da86425f3d84"} Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.309294 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.423527 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x6bv\" (UniqueName: \"kubernetes.io/projected/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-kube-api-access-6x6bv\") pod \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.424120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-inventory\") pod \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.424382 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-ssh-key\") pod \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\" (UID: \"5610cb4e-4f23-4a76-b59c-5e3db6b532ff\") " Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.431317 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-kube-api-access-6x6bv" (OuterVolumeSpecName: "kube-api-access-6x6bv") pod "5610cb4e-4f23-4a76-b59c-5e3db6b532ff" (UID: "5610cb4e-4f23-4a76-b59c-5e3db6b532ff"). InnerVolumeSpecName "kube-api-access-6x6bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.480844 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5610cb4e-4f23-4a76-b59c-5e3db6b532ff" (UID: "5610cb4e-4f23-4a76-b59c-5e3db6b532ff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.481493 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-inventory" (OuterVolumeSpecName: "inventory") pod "5610cb4e-4f23-4a76-b59c-5e3db6b532ff" (UID: "5610cb4e-4f23-4a76-b59c-5e3db6b532ff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.527558 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.527594 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.527606 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x6bv\" (UniqueName: \"kubernetes.io/projected/5610cb4e-4f23-4a76-b59c-5e3db6b532ff-kube-api-access-6x6bv\") on node \"crc\" DevicePath \"\"" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.840387 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" event={"ID":"5610cb4e-4f23-4a76-b59c-5e3db6b532ff","Type":"ContainerDied","Data":"2b85f7801aaed461f8fe9773943fba61102823ae22da04dc4edc455460d0ccb2"} Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.840919 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b85f7801aaed461f8fe9773943fba61102823ae22da04dc4edc455460d0ccb2" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.841056 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-px9xz" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.934050 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf"] Oct 02 18:54:57 crc kubenswrapper[4832]: E1002 18:54:57.934840 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5610cb4e-4f23-4a76-b59c-5e3db6b532ff" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.934868 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5610cb4e-4f23-4a76-b59c-5e3db6b532ff" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.935319 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5610cb4e-4f23-4a76-b59c-5e3db6b532ff" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.936936 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.939529 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.940085 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.940560 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.941146 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:54:57 crc kubenswrapper[4832]: I1002 18:54:57.942998 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf"] Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.043148 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmjl2\" (UniqueName: \"kubernetes.io/projected/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-kube-api-access-lmjl2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.043217 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.043279 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.145626 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.145753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.146122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmjl2\" (UniqueName: \"kubernetes.io/projected/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-kube-api-access-lmjl2\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.160252 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.160636 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.168964 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmjl2\" (UniqueName: \"kubernetes.io/projected/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-kube-api-access-lmjl2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.271123 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:54:58 crc kubenswrapper[4832]: I1002 18:54:58.852780 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf"] Oct 02 18:54:59 crc kubenswrapper[4832]: I1002 18:54:59.866163 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" event={"ID":"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30","Type":"ContainerStarted","Data":"b643f7940a66105e5d222cb1a6acf2bce7d91c5c72d3f8b9185227299b4ef030"} Oct 02 18:55:04 crc kubenswrapper[4832]: I1002 18:55:04.942187 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" event={"ID":"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30","Type":"ContainerStarted","Data":"e87faca70b8d7c194fd8d9277125c1cfbebabedecbc1ca1a116f3083314c072e"} Oct 02 18:55:04 crc kubenswrapper[4832]: I1002 18:55:04.965171 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" podStartSLOduration=3.345834718 podStartE2EDuration="7.965150541s" podCreationTimestamp="2025-10-02 18:54:57 +0000 UTC" firstStartedPulling="2025-10-02 18:54:58.856205103 +0000 UTC m=+2055.825647985" lastFinishedPulling="2025-10-02 18:55:03.475520946 +0000 UTC m=+2060.444963808" observedRunningTime="2025-10-02 18:55:04.957275576 +0000 UTC m=+2061.926718458" watchObservedRunningTime="2025-10-02 18:55:04.965150541 +0000 UTC m=+2061.934593423" Oct 02 18:55:10 crc kubenswrapper[4832]: I1002 18:55:10.011059 4832 generic.go:334] "Generic (PLEG): container finished" podID="06a947a2-8fbe-4cf3-84d5-cf24e83a6e30" containerID="e87faca70b8d7c194fd8d9277125c1cfbebabedecbc1ca1a116f3083314c072e" exitCode=0 Oct 02 18:55:10 crc kubenswrapper[4832]: I1002 
18:55:10.011190 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" event={"ID":"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30","Type":"ContainerDied","Data":"e87faca70b8d7c194fd8d9277125c1cfbebabedecbc1ca1a116f3083314c072e"} Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.479990 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.547203 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-ssh-key\") pod \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.547400 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-inventory\") pod \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.547630 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmjl2\" (UniqueName: \"kubernetes.io/projected/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-kube-api-access-lmjl2\") pod \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\" (UID: \"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30\") " Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.559619 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-kube-api-access-lmjl2" (OuterVolumeSpecName: "kube-api-access-lmjl2") pod "06a947a2-8fbe-4cf3-84d5-cf24e83a6e30" (UID: "06a947a2-8fbe-4cf3-84d5-cf24e83a6e30"). InnerVolumeSpecName "kube-api-access-lmjl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.586134 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "06a947a2-8fbe-4cf3-84d5-cf24e83a6e30" (UID: "06a947a2-8fbe-4cf3-84d5-cf24e83a6e30"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.611996 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-inventory" (OuterVolumeSpecName: "inventory") pod "06a947a2-8fbe-4cf3-84d5-cf24e83a6e30" (UID: "06a947a2-8fbe-4cf3-84d5-cf24e83a6e30"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.652364 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmjl2\" (UniqueName: \"kubernetes.io/projected/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-kube-api-access-lmjl2\") on node \"crc\" DevicePath \"\"" Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.652394 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:55:11 crc kubenswrapper[4832]: I1002 18:55:11.652404 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/06a947a2-8fbe-4cf3-84d5-cf24e83a6e30-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.040081 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" event={"ID":"06a947a2-8fbe-4cf3-84d5-cf24e83a6e30","Type":"ContainerDied","Data":"b643f7940a66105e5d222cb1a6acf2bce7d91c5c72d3f8b9185227299b4ef030"} Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.040397 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b643f7940a66105e5d222cb1a6acf2bce7d91c5c72d3f8b9185227299b4ef030" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.040158 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.129137 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t"] Oct 02 18:55:12 crc kubenswrapper[4832]: E1002 18:55:12.129746 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a947a2-8fbe-4cf3-84d5-cf24e83a6e30" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.129771 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a947a2-8fbe-4cf3-84d5-cf24e83a6e30" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.129987 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a947a2-8fbe-4cf3-84d5-cf24e83a6e30" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.130862 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.136509 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.136649 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.138461 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.147954 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.151210 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t"] Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.164235 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.165094 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5gtc\" (UniqueName: \"kubernetes.io/projected/a8da994a-7b15-400a-8316-27a8c28cafe1-kube-api-access-r5gtc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.165335 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.267333 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5gtc\" (UniqueName: \"kubernetes.io/projected/a8da994a-7b15-400a-8316-27a8c28cafe1-kube-api-access-r5gtc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.267446 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.267557 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: 
\"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.273287 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.275386 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.286913 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5gtc\" (UniqueName: \"kubernetes.io/projected/a8da994a-7b15-400a-8316-27a8c28cafe1-kube-api-access-r5gtc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-47m5t\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:12 crc kubenswrapper[4832]: I1002 18:55:12.466645 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:13 crc kubenswrapper[4832]: I1002 18:55:13.158389 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t"] Oct 02 18:55:13 crc kubenswrapper[4832]: W1002 18:55:13.161871 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8da994a_7b15_400a_8316_27a8c28cafe1.slice/crio-fe5aa089eaedb38f08f2bb2598253dee8723386408152624815aef500cb389db WatchSource:0}: Error finding container fe5aa089eaedb38f08f2bb2598253dee8723386408152624815aef500cb389db: Status 404 returned error can't find the container with id fe5aa089eaedb38f08f2bb2598253dee8723386408152624815aef500cb389db Oct 02 18:55:14 crc kubenswrapper[4832]: I1002 18:55:14.072782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" event={"ID":"a8da994a-7b15-400a-8316-27a8c28cafe1","Type":"ContainerStarted","Data":"a1c2159b1ea88274f979b7dad7cd772407acd2dac22f1124f6980273453b23d6"} Oct 02 18:55:14 crc kubenswrapper[4832]: I1002 18:55:14.073312 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" event={"ID":"a8da994a-7b15-400a-8316-27a8c28cafe1","Type":"ContainerStarted","Data":"fe5aa089eaedb38f08f2bb2598253dee8723386408152624815aef500cb389db"} Oct 02 18:55:14 crc kubenswrapper[4832]: I1002 18:55:14.112899 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" podStartSLOduration=1.600603325 podStartE2EDuration="2.112871623s" podCreationTimestamp="2025-10-02 18:55:12 +0000 UTC" firstStartedPulling="2025-10-02 18:55:13.167372605 +0000 UTC m=+2070.136815487" lastFinishedPulling="2025-10-02 18:55:13.679640863 +0000 UTC m=+2070.649083785" observedRunningTime="2025-10-02 18:55:14.094487801 +0000 UTC 
m=+2071.063930683" watchObservedRunningTime="2025-10-02 18:55:14.112871623 +0000 UTC m=+2071.082314535" Oct 02 18:55:18 crc kubenswrapper[4832]: I1002 18:55:18.250610 4832 scope.go:117] "RemoveContainer" containerID="246891ccf1730a456c880550fbb7c7c2019276914070764ba9f3d90faadbcca5" Oct 02 18:55:18 crc kubenswrapper[4832]: I1002 18:55:18.297584 4832 scope.go:117] "RemoveContainer" containerID="e19f98130721a44029df32a8019885e9a7a6092e0ba9be01c2aa810338882db8" Oct 02 18:55:18 crc kubenswrapper[4832]: I1002 18:55:18.373038 4832 scope.go:117] "RemoveContainer" containerID="a7a0b534c3b086b1e082cb966e12050644137511336a787a16e8878826e1e870" Oct 02 18:55:18 crc kubenswrapper[4832]: I1002 18:55:18.443601 4832 scope.go:117] "RemoveContainer" containerID="454f486102f5d92d3d1f6ece94db22a7debb2fcb1b00e8b35d399856a093dfc8" Oct 02 18:55:35 crc kubenswrapper[4832]: I1002 18:55:35.042510 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvkb2"] Oct 02 18:55:35 crc kubenswrapper[4832]: I1002 18:55:35.054661 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lvkb2"] Oct 02 18:55:35 crc kubenswrapper[4832]: I1002 18:55:35.238534 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6400222-5886-46c9-8018-4767737c3d12" path="/var/lib/kubelet/pods/e6400222-5886-46c9-8018-4767737c3d12/volumes" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.341000 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wb7q4"] Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.349938 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.355824 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb7q4"] Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.429192 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbccs\" (UniqueName: \"kubernetes.io/projected/84e06c39-2282-463a-b212-361a6af827a2-kube-api-access-dbccs\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.429366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-utilities\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.430152 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-catalog-content\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.532594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbccs\" (UniqueName: \"kubernetes.io/projected/84e06c39-2282-463a-b212-361a6af827a2-kube-api-access-dbccs\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " 
pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.532728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-utilities\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.532879 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-catalog-content\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.533506 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-catalog-content\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.533544 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-utilities\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.554622 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbccs\" (UniqueName: \"kubernetes.io/projected/84e06c39-2282-463a-b212-361a6af827a2-kube-api-access-dbccs\") pod \"community-operators-wb7q4\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:41 crc kubenswrapper[4832]: I1002 18:55:41.723691 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:42 crc kubenswrapper[4832]: I1002 18:55:42.264919 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wb7q4"] Oct 02 18:55:42 crc kubenswrapper[4832]: I1002 18:55:42.443412 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb7q4" event={"ID":"84e06c39-2282-463a-b212-361a6af827a2","Type":"ContainerStarted","Data":"61d02b5e55ac551d9a72ba03b1c0f614e1867e3e08e0f38b9e6161bcbbd7cf01"} Oct 02 18:55:43 crc kubenswrapper[4832]: I1002 18:55:43.456387 4832 generic.go:334] "Generic (PLEG): container finished" podID="84e06c39-2282-463a-b212-361a6af827a2" containerID="71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d" exitCode=0 Oct 02 18:55:43 crc kubenswrapper[4832]: I1002 18:55:43.456510 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb7q4" event={"ID":"84e06c39-2282-463a-b212-361a6af827a2","Type":"ContainerDied","Data":"71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d"} Oct 02 18:55:45 crc kubenswrapper[4832]: I1002 18:55:45.488365 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb7q4" event={"ID":"84e06c39-2282-463a-b212-361a6af827a2","Type":"ContainerStarted","Data":"239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe"} Oct 02 18:55:47 crc kubenswrapper[4832]: I1002 18:55:47.532713 4832 generic.go:334] "Generic (PLEG): container finished" podID="84e06c39-2282-463a-b212-361a6af827a2" containerID="239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe" exitCode=0 Oct 02 18:55:47 crc kubenswrapper[4832]: I1002 18:55:47.532801 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb7q4" event={"ID":"84e06c39-2282-463a-b212-361a6af827a2","Type":"ContainerDied","Data":"239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe"} Oct 02 18:55:49 crc kubenswrapper[4832]: I1002 18:55:49.560161 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb7q4" event={"ID":"84e06c39-2282-463a-b212-361a6af827a2","Type":"ContainerStarted","Data":"e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e"} Oct 02 18:55:49 crc kubenswrapper[4832]: I1002 18:55:49.601877 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wb7q4" podStartSLOduration=4.025436372 podStartE2EDuration="8.601855992s" podCreationTimestamp="2025-10-02 18:55:41 +0000 UTC" firstStartedPulling="2025-10-02 18:55:43.458741891 +0000 UTC m=+2100.428184783" lastFinishedPulling="2025-10-02 18:55:48.035161531 +0000 UTC m=+2105.004604403" observedRunningTime="2025-10-02 18:55:49.585183944 +0000 UTC m=+2106.554626846" watchObservedRunningTime="2025-10-02 18:55:49.601855992 +0000 UTC m=+2106.571298864" Oct 02 18:55:51 crc kubenswrapper[4832]: I1002 18:55:51.724821 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:51 crc kubenswrapper[4832]: I1002 18:55:51.725605 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:51 crc kubenswrapper[4832]: I1002 18:55:51.808770 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:55:52 crc kubenswrapper[4832]: I1002 18:55:52.613698 4832 generic.go:334] "Generic (PLEG): container finished" podID="a8da994a-7b15-400a-8316-27a8c28cafe1" containerID="a1c2159b1ea88274f979b7dad7cd772407acd2dac22f1124f6980273453b23d6" exitCode=0 Oct 02 18:55:52 crc kubenswrapper[4832]: I1002 18:55:52.613830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" event={"ID":"a8da994a-7b15-400a-8316-27a8c28cafe1","Type":"ContainerDied","Data":"a1c2159b1ea88274f979b7dad7cd772407acd2dac22f1124f6980273453b23d6"} Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.317406 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.400794 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5gtc\" (UniqueName: \"kubernetes.io/projected/a8da994a-7b15-400a-8316-27a8c28cafe1-kube-api-access-r5gtc\") pod \"a8da994a-7b15-400a-8316-27a8c28cafe1\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.401502 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-ssh-key\") pod \"a8da994a-7b15-400a-8316-27a8c28cafe1\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.401759 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-inventory\") pod \"a8da994a-7b15-400a-8316-27a8c28cafe1\" (UID: \"a8da994a-7b15-400a-8316-27a8c28cafe1\") " Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.411374 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8da994a-7b15-400a-8316-27a8c28cafe1-kube-api-access-r5gtc" (OuterVolumeSpecName: "kube-api-access-r5gtc") pod "a8da994a-7b15-400a-8316-27a8c28cafe1" (UID: "a8da994a-7b15-400a-8316-27a8c28cafe1"). InnerVolumeSpecName "kube-api-access-r5gtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.439514 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a8da994a-7b15-400a-8316-27a8c28cafe1" (UID: "a8da994a-7b15-400a-8316-27a8c28cafe1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.467190 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-inventory" (OuterVolumeSpecName: "inventory") pod "a8da994a-7b15-400a-8316-27a8c28cafe1" (UID: "a8da994a-7b15-400a-8316-27a8c28cafe1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.505857 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.505902 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8da994a-7b15-400a-8316-27a8c28cafe1-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.505915 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5gtc\" (UniqueName: \"kubernetes.io/projected/a8da994a-7b15-400a-8316-27a8c28cafe1-kube-api-access-r5gtc\") on node \"crc\" DevicePath \"\"" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.641366 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" event={"ID":"a8da994a-7b15-400a-8316-27a8c28cafe1","Type":"ContainerDied","Data":"fe5aa089eaedb38f08f2bb2598253dee8723386408152624815aef500cb389db"} Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.641414 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe5aa089eaedb38f08f2bb2598253dee8723386408152624815aef500cb389db" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.641478 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-47m5t" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.846546 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg"] Oct 02 18:55:54 crc kubenswrapper[4832]: E1002 18:55:54.847579 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8da994a-7b15-400a-8316-27a8c28cafe1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.847609 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8da994a-7b15-400a-8316-27a8c28cafe1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.848042 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8da994a-7b15-400a-8316-27a8c28cafe1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.849486 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.853376 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.853597 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.853662 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.853873 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:55:54 crc kubenswrapper[4832]: I1002 18:55:54.858331 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg"] Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.017076 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.017453 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b965c\" (UniqueName: \"kubernetes.io/projected/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-kube-api-access-b965c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.017772 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.121498 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.121653 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b965c\" (UniqueName: \"kubernetes.io/projected/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-kube-api-access-b965c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.121873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" 
(UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.129163 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.130239 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.145748 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b965c\" (UniqueName: \"kubernetes.io/projected/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-kube-api-access-b965c\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fznkg\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.182537 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:55:55 crc kubenswrapper[4832]: W1002 18:55:55.790425 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod691f5920_3afd_4cf0_8ccb_61d2bbff10c2.slice/crio-7811d82e37297a2e3ae460dffeb7402cf5610124f907a9ba85ba90df76fc0ba1 WatchSource:0}: Error finding container 7811d82e37297a2e3ae460dffeb7402cf5610124f907a9ba85ba90df76fc0ba1: Status 404 returned error can't find the container with id 7811d82e37297a2e3ae460dffeb7402cf5610124f907a9ba85ba90df76fc0ba1 Oct 02 18:55:55 crc kubenswrapper[4832]: I1002 18:55:55.796697 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg"] Oct 02 18:55:56 crc kubenswrapper[4832]: I1002 18:55:56.667688 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" event={"ID":"691f5920-3afd-4cf0-8ccb-61d2bbff10c2","Type":"ContainerStarted","Data":"ffb3a3b794e9eb7963104c9c2cf801c37145fc3435f682a03c5b29e585065a9c"} Oct 02 18:55:56 crc kubenswrapper[4832]: I1002 18:55:56.668199 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" event={"ID":"691f5920-3afd-4cf0-8ccb-61d2bbff10c2","Type":"ContainerStarted","Data":"7811d82e37297a2e3ae460dffeb7402cf5610124f907a9ba85ba90df76fc0ba1"} Oct 02 18:56:01 crc kubenswrapper[4832]: I1002 18:56:01.802447 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:56:01 crc kubenswrapper[4832]: I1002 18:56:01.831231 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" podStartSLOduration=7.325708397 podStartE2EDuration="7.831210715s" podCreationTimestamp="2025-10-02 18:55:54 +0000 UTC" 
firstStartedPulling="2025-10-02 18:55:55.793240313 +0000 UTC m=+2112.762683185" lastFinishedPulling="2025-10-02 18:55:56.298742641 +0000 UTC m=+2113.268185503" observedRunningTime="2025-10-02 18:55:56.705734335 +0000 UTC m=+2113.675177207" watchObservedRunningTime="2025-10-02 18:56:01.831210715 +0000 UTC m=+2118.800653587" Oct 02 18:56:01 crc kubenswrapper[4832]: I1002 18:56:01.861195 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb7q4"] Oct 02 18:56:02 crc kubenswrapper[4832]: I1002 18:56:02.757762 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wb7q4" podUID="84e06c39-2282-463a-b212-361a6af827a2" containerName="registry-server" containerID="cri-o://e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e" gracePeriod=2 Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.237781 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.351766 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-catalog-content\") pod \"84e06c39-2282-463a-b212-361a6af827a2\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.351838 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-utilities\") pod \"84e06c39-2282-463a-b212-361a6af827a2\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.351975 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbccs\" (UniqueName: \"kubernetes.io/projected/84e06c39-2282-463a-b212-361a6af827a2-kube-api-access-dbccs\") pod \"84e06c39-2282-463a-b212-361a6af827a2\" (UID: \"84e06c39-2282-463a-b212-361a6af827a2\") " Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.353171 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-utilities" (OuterVolumeSpecName: "utilities") pod "84e06c39-2282-463a-b212-361a6af827a2" (UID: "84e06c39-2282-463a-b212-361a6af827a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.354324 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.361817 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e06c39-2282-463a-b212-361a6af827a2-kube-api-access-dbccs" (OuterVolumeSpecName: "kube-api-access-dbccs") pod "84e06c39-2282-463a-b212-361a6af827a2" (UID: "84e06c39-2282-463a-b212-361a6af827a2"). InnerVolumeSpecName "kube-api-access-dbccs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.407858 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84e06c39-2282-463a-b212-361a6af827a2" (UID: "84e06c39-2282-463a-b212-361a6af827a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.457221 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e06c39-2282-463a-b212-361a6af827a2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.457298 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbccs\" (UniqueName: \"kubernetes.io/projected/84e06c39-2282-463a-b212-361a6af827a2-kube-api-access-dbccs\") on node \"crc\" DevicePath \"\"" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.780671 4832 generic.go:334] "Generic (PLEG): container finished" podID="84e06c39-2282-463a-b212-361a6af827a2" containerID="e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e" exitCode=0 Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.780789 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb7q4" event={"ID":"84e06c39-2282-463a-b212-361a6af827a2","Type":"ContainerDied","Data":"e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e"} Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.780853 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wb7q4" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.781106 4832 scope.go:117] "RemoveContainer" containerID="e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.781079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wb7q4" event={"ID":"84e06c39-2282-463a-b212-361a6af827a2","Type":"ContainerDied","Data":"61d02b5e55ac551d9a72ba03b1c0f614e1867e3e08e0f38b9e6161bcbbd7cf01"} Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.818016 4832 scope.go:117] "RemoveContainer" containerID="239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.821222 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wb7q4"] Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.829977 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wb7q4"] Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.844834 4832 scope.go:117] "RemoveContainer" containerID="71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.898564 4832 scope.go:117] "RemoveContainer" containerID="e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e" Oct 02 18:56:03 crc kubenswrapper[4832]: E1002 18:56:03.899253 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e\": container with ID starting with 
e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e not found: ID does not exist" containerID="e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.899339 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e"} err="failed to get container status \"e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e\": rpc error: code = NotFound desc = could not find container \"e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e\": container with ID starting with e742be1e30e496162cce9b3e6d6e4349920ddc74c2439af5756d5be02f4cb19e not found: ID does not exist" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.899380 4832 scope.go:117] "RemoveContainer" containerID="239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe" Oct 02 18:56:03 crc kubenswrapper[4832]: E1002 18:56:03.900082 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe\": container with ID starting with 239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe not found: ID does not exist" containerID="239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.900153 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe"} err="failed to get container status \"239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe\": rpc error: code = NotFound desc = could not find container \"239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe\": container with ID starting with 239bd1471feb9c6867bf23a1ce7207a2d09f823315413326f5c0e08fb48ee1fe not found: ID does not exist" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.900201 4832 scope.go:117] "RemoveContainer" containerID="71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d" Oct 02 18:56:03 crc kubenswrapper[4832]: E1002 18:56:03.900632 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d\": container with ID starting with 71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d not found: ID does not exist" containerID="71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d" Oct 02 18:56:03 crc kubenswrapper[4832]: I1002 18:56:03.900679 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d"} err="failed to get container status \"71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d\": rpc error: code = NotFound desc = could not find container \"71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d\": container with ID starting with 71d23607c949581e8b6c6d00f17d9d910e5a4281827cb7a6288b205ff8b2ba5d not found: ID does not exist" Oct 02 18:56:05 crc kubenswrapper[4832]: I1002 18:56:05.248400 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e06c39-2282-463a-b212-361a6af827a2" path="/var/lib/kubelet/pods/84e06c39-2282-463a-b212-361a6af827a2/volumes" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.436038 
4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pqrt4"] Oct 02 18:56:15 crc kubenswrapper[4832]: E1002 18:56:15.437663 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e06c39-2282-463a-b212-361a6af827a2" containerName="extract-utilities" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.437689 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e06c39-2282-463a-b212-361a6af827a2" containerName="extract-utilities" Oct 02 18:56:15 crc kubenswrapper[4832]: E1002 18:56:15.437742 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e06c39-2282-463a-b212-361a6af827a2" containerName="extract-content" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.437755 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e06c39-2282-463a-b212-361a6af827a2" containerName="extract-content" Oct 02 18:56:15 crc kubenswrapper[4832]: E1002 18:56:15.437815 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e06c39-2282-463a-b212-361a6af827a2" containerName="registry-server" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.437825 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e06c39-2282-463a-b212-361a6af827a2" containerName="registry-server" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.438095 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e06c39-2282-463a-b212-361a6af827a2" containerName="registry-server" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.440341 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.444956 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pqrt4"] Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.496231 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zktnz\" (UniqueName: \"kubernetes.io/projected/b618281a-5e3b-4b9b-a0c3-14f81152bccd-kube-api-access-zktnz\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.496338 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-utilities\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.496466 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-catalog-content\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.599182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zktnz\" (UniqueName: \"kubernetes.io/projected/b618281a-5e3b-4b9b-a0c3-14f81152bccd-kube-api-access-zktnz\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc 
kubenswrapper[4832]: I1002 18:56:15.599285 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-utilities\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.599434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-catalog-content\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.599894 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-utilities\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.599904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-catalog-content\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.626249 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zktnz\" (UniqueName: \"kubernetes.io/projected/b618281a-5e3b-4b9b-a0c3-14f81152bccd-kube-api-access-zktnz\") pod \"certified-operators-pqrt4\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:15 crc kubenswrapper[4832]: I1002 18:56:15.778397 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:16 crc kubenswrapper[4832]: I1002 18:56:16.274798 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pqrt4"] Oct 02 18:56:16 crc kubenswrapper[4832]: I1002 18:56:16.961214 4832 generic.go:334] "Generic (PLEG): container finished" podID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerID="c0931541b8818bef8a4544c93e065372a5b4ef7eaf69dc548dbb26d4e1f6560c" exitCode=0 Oct 02 18:56:16 crc kubenswrapper[4832]: I1002 18:56:16.961273 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqrt4" event={"ID":"b618281a-5e3b-4b9b-a0c3-14f81152bccd","Type":"ContainerDied","Data":"c0931541b8818bef8a4544c93e065372a5b4ef7eaf69dc548dbb26d4e1f6560c"} Oct 02 18:56:16 crc kubenswrapper[4832]: I1002 18:56:16.961537 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqrt4" event={"ID":"b618281a-5e3b-4b9b-a0c3-14f81152bccd","Type":"ContainerStarted","Data":"1bae0716e0d0b9876a84273002fc7f65e3c0127f7180387c262936686bfa4059"} Oct 02 18:56:18 crc kubenswrapper[4832]: I1002 18:56:18.593585 4832 scope.go:117] "RemoveContainer" containerID="8f6060e1226594f5c9c4e91fed95b13761c9c5727a71c44f8e43a6fdf31695cc" Oct 02 18:56:18 crc kubenswrapper[4832]: I1002 18:56:18.999253 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqrt4" event={"ID":"b618281a-5e3b-4b9b-a0c3-14f81152bccd","Type":"ContainerStarted","Data":"8ba2f1f0928a281e6724e3128d1d7e6d74d5c379d28400d69bac0c4e86448cc6"} Oct 02 18:56:21 crc kubenswrapper[4832]: I1002 18:56:21.027028 4832 generic.go:334] "Generic (PLEG): container finished" podID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerID="8ba2f1f0928a281e6724e3128d1d7e6d74d5c379d28400d69bac0c4e86448cc6" exitCode=0 Oct 02 18:56:21 crc kubenswrapper[4832]: I1002 18:56:21.027156 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqrt4" event={"ID":"b618281a-5e3b-4b9b-a0c3-14f81152bccd","Type":"ContainerDied","Data":"8ba2f1f0928a281e6724e3128d1d7e6d74d5c379d28400d69bac0c4e86448cc6"} Oct 02 18:56:23 crc kubenswrapper[4832]: I1002 18:56:23.066178 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqrt4" event={"ID":"b618281a-5e3b-4b9b-a0c3-14f81152bccd","Type":"ContainerStarted","Data":"e404d814c609cf8b3e6e736095a82895346f8e7b586c4d2aa8525234c06eed59"} Oct 02 18:56:23 crc kubenswrapper[4832]: I1002 18:56:23.096231 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pqrt4" podStartSLOduration=3.242070439 podStartE2EDuration="8.096165432s" podCreationTimestamp="2025-10-02 18:56:15 +0000 UTC" firstStartedPulling="2025-10-02 18:56:16.96558053 +0000 UTC m=+2133.935023402" lastFinishedPulling="2025-10-02 18:56:21.819675513 +0000 UTC m=+2138.789118395" observedRunningTime="2025-10-02 18:56:23.093893231 +0000 UTC m=+2140.063336123" watchObservedRunningTime="2025-10-02 18:56:23.096165432 +0000 UTC m=+2140.065608374" Oct 02 18:56:25 crc kubenswrapper[4832]: I1002 18:56:25.778700 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:25 crc kubenswrapper[4832]: I1002 18:56:25.778996 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:25 crc kubenswrapper[4832]: I1002 18:56:25.871864 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:26 crc kubenswrapper[4832]: I1002 18:56:26.874973 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:56:26 crc kubenswrapper[4832]: I1002 18:56:26.875031 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:56:35 crc kubenswrapper[4832]: I1002 18:56:35.846518 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.085346 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wchsb"] Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.093976 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.130374 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wchsb"] Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.175372 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfd4\" (UniqueName: \"kubernetes.io/projected/a0899c70-4625-43e3-985d-7c80da1c8f6c-kube-api-access-gnfd4\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.175546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-utilities\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.175690 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-catalog-content\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.281799 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-utilities\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.282341 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-catalog-content\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.282659 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-utilities\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.283028 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-catalog-content\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.283643 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfd4\" (UniqueName: \"kubernetes.io/projected/a0899c70-4625-43e3-985d-7c80da1c8f6c-kube-api-access-gnfd4\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.307900 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfd4\" (UniqueName: \"kubernetes.io/projected/a0899c70-4625-43e3-985d-7c80da1c8f6c-kube-api-access-gnfd4\") pod \"redhat-operators-wchsb\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.432438 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:56:39 crc kubenswrapper[4832]: I1002 18:56:39.929488 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wchsb"] Oct 02 18:56:39 crc kubenswrapper[4832]: W1002 18:56:39.932376 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0899c70_4625_43e3_985d_7c80da1c8f6c.slice/crio-684a907c2ef663d8edbc820a9cd3abefedfb2e4b41c4522bb03b65a8b4183aff WatchSource:0}: Error finding container 684a907c2ef663d8edbc820a9cd3abefedfb2e4b41c4522bb03b65a8b4183aff: Status 404 returned error can't find the container with id 684a907c2ef663d8edbc820a9cd3abefedfb2e4b41c4522bb03b65a8b4183aff Oct 02 18:56:40 crc kubenswrapper[4832]: I1002 18:56:40.281779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wchsb" event={"ID":"a0899c70-4625-43e3-985d-7c80da1c8f6c","Type":"ContainerStarted","Data":"684a907c2ef663d8edbc820a9cd3abefedfb2e4b41c4522bb03b65a8b4183aff"} Oct 02 18:56:41 crc kubenswrapper[4832]: I1002 18:56:41.294433 4832 generic.go:334] "Generic (PLEG): container finished" podID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerID="6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9" exitCode=0 Oct 02 18:56:41 crc kubenswrapper[4832]: I1002 18:56:41.294505 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wchsb" event={"ID":"a0899c70-4625-43e3-985d-7c80da1c8f6c","Type":"ContainerDied","Data":"6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9"} Oct 02 18:56:43 crc kubenswrapper[4832]: I1002 18:56:43.872980 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pqrt4"] Oct 02 18:56:43 crc kubenswrapper[4832]: I1002 18:56:43.873684 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pqrt4" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerName="registry-server" containerID="cri-o://e404d814c609cf8b3e6e736095a82895346f8e7b586c4d2aa8525234c06eed59" gracePeriod=2 Oct 02 18:56:44 crc kubenswrapper[4832]: I1002 18:56:44.354766 4832 generic.go:334] "Generic (PLEG): container finished" podID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerID="e404d814c609cf8b3e6e736095a82895346f8e7b586c4d2aa8525234c06eed59" exitCode=0 Oct 02 18:56:44 crc kubenswrapper[4832]: I1002 18:56:44.355097 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqrt4" event={"ID":"b618281a-5e3b-4b9b-a0c3-14f81152bccd","Type":"ContainerDied","Data":"e404d814c609cf8b3e6e736095a82895346f8e7b586c4d2aa8525234c06eed59"} Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.344969 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.372189 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wchsb" event={"ID":"a0899c70-4625-43e3-985d-7c80da1c8f6c","Type":"ContainerStarted","Data":"d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac"} Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.385331 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pqrt4" event={"ID":"b618281a-5e3b-4b9b-a0c3-14f81152bccd","Type":"ContainerDied","Data":"1bae0716e0d0b9876a84273002fc7f65e3c0127f7180387c262936686bfa4059"} Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.385901 4832 scope.go:117] "RemoveContainer" containerID="e404d814c609cf8b3e6e736095a82895346f8e7b586c4d2aa8525234c06eed59" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.386074 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pqrt4" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.433070 4832 scope.go:117] "RemoveContainer" containerID="8ba2f1f0928a281e6724e3128d1d7e6d74d5c379d28400d69bac0c4e86448cc6" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.460888 4832 scope.go:117] "RemoveContainer" containerID="c0931541b8818bef8a4544c93e065372a5b4ef7eaf69dc548dbb26d4e1f6560c" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.482878 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-utilities\") pod \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.483155 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-catalog-content\") pod \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.483497 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zktnz\" (UniqueName: \"kubernetes.io/projected/b618281a-5e3b-4b9b-a0c3-14f81152bccd-kube-api-access-zktnz\") pod \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\" (UID: \"b618281a-5e3b-4b9b-a0c3-14f81152bccd\") " Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.484411 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-utilities" (OuterVolumeSpecName: "utilities") pod "b618281a-5e3b-4b9b-a0c3-14f81152bccd" (UID: "b618281a-5e3b-4b9b-a0c3-14f81152bccd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.492545 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b618281a-5e3b-4b9b-a0c3-14f81152bccd-kube-api-access-zktnz" (OuterVolumeSpecName: "kube-api-access-zktnz") pod "b618281a-5e3b-4b9b-a0c3-14f81152bccd" (UID: "b618281a-5e3b-4b9b-a0c3-14f81152bccd"). InnerVolumeSpecName "kube-api-access-zktnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.528223 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b618281a-5e3b-4b9b-a0c3-14f81152bccd" (UID: "b618281a-5e3b-4b9b-a0c3-14f81152bccd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.587514 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zktnz\" (UniqueName: \"kubernetes.io/projected/b618281a-5e3b-4b9b-a0c3-14f81152bccd-kube-api-access-zktnz\") on node \"crc\" DevicePath \"\"" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.587562 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.587581 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b618281a-5e3b-4b9b-a0c3-14f81152bccd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.729756 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pqrt4"] Oct 02 18:56:45 crc kubenswrapper[4832]: I1002 18:56:45.738845 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pqrt4"] Oct 02 18:56:47 crc kubenswrapper[4832]: I1002 18:56:47.247954 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" path="/var/lib/kubelet/pods/b618281a-5e3b-4b9b-a0c3-14f81152bccd/volumes" Oct 02 18:56:56 crc kubenswrapper[4832]: I1002 18:56:56.875347 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:56:56 crc kubenswrapper[4832]: I1002 18:56:56.876441 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 18:56:57 crc kubenswrapper[4832]: I1002 18:56:57.534577 4832 generic.go:334] "Generic (PLEG): container finished" podID="691f5920-3afd-4cf0-8ccb-61d2bbff10c2" containerID="ffb3a3b794e9eb7963104c9c2cf801c37145fc3435f682a03c5b29e585065a9c" exitCode=2 Oct 02 18:56:57 crc kubenswrapper[4832]: I1002 18:56:57.534649 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" event={"ID":"691f5920-3afd-4cf0-8ccb-61d2bbff10c2","Type":"ContainerDied","Data":"ffb3a3b794e9eb7963104c9c2cf801c37145fc3435f682a03c5b29e585065a9c"} Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.560198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" 
event={"ID":"691f5920-3afd-4cf0-8ccb-61d2bbff10c2","Type":"ContainerDied","Data":"7811d82e37297a2e3ae460dffeb7402cf5610124f907a9ba85ba90df76fc0ba1"} Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.560847 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7811d82e37297a2e3ae460dffeb7402cf5610124f907a9ba85ba90df76fc0ba1" Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.560548 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.681359 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-ssh-key\") pod \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.681442 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b965c\" (UniqueName: \"kubernetes.io/projected/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-kube-api-access-b965c\") pod \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.681729 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-inventory\") pod \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\" (UID: \"691f5920-3afd-4cf0-8ccb-61d2bbff10c2\") " Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.695558 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-kube-api-access-b965c" (OuterVolumeSpecName: "kube-api-access-b965c") pod "691f5920-3afd-4cf0-8ccb-61d2bbff10c2" (UID: "691f5920-3afd-4cf0-8ccb-61d2bbff10c2"). InnerVolumeSpecName "kube-api-access-b965c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.719547 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "691f5920-3afd-4cf0-8ccb-61d2bbff10c2" (UID: "691f5920-3afd-4cf0-8ccb-61d2bbff10c2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.737055 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-inventory" (OuterVolumeSpecName: "inventory") pod "691f5920-3afd-4cf0-8ccb-61d2bbff10c2" (UID: "691f5920-3afd-4cf0-8ccb-61d2bbff10c2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.784664 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.784704 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b965c\" (UniqueName: \"kubernetes.io/projected/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-kube-api-access-b965c\") on node \"crc\" DevicePath \"\"" Oct 02 18:56:59 crc kubenswrapper[4832]: I1002 18:56:59.784715 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/691f5920-3afd-4cf0-8ccb-61d2bbff10c2-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:57:00 crc kubenswrapper[4832]: I1002 18:57:00.570065 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fznkg" Oct 02 18:57:01 crc kubenswrapper[4832]: I1002 18:57:01.589930 4832 generic.go:334] "Generic (PLEG): container finished" podID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerID="d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac" exitCode=0 Oct 02 18:57:01 crc kubenswrapper[4832]: I1002 18:57:01.589974 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wchsb" event={"ID":"a0899c70-4625-43e3-985d-7c80da1c8f6c","Type":"ContainerDied","Data":"d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac"} Oct 02 18:57:03 crc kubenswrapper[4832]: I1002 18:57:03.616248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wchsb" event={"ID":"a0899c70-4625-43e3-985d-7c80da1c8f6c","Type":"ContainerStarted","Data":"957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348"} Oct 02 18:57:03 crc kubenswrapper[4832]: I1002 18:57:03.646118 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wchsb" podStartSLOduration=3.655130847 podStartE2EDuration="24.646095936s" podCreationTimestamp="2025-10-02 18:56:39 +0000 UTC" firstStartedPulling="2025-10-02 18:56:41.296594527 +0000 UTC m=+2158.266037409" lastFinishedPulling="2025-10-02 18:57:02.287559586 +0000 UTC m=+2179.257002498" observedRunningTime="2025-10-02 18:57:03.640820791 +0000 UTC m=+2180.610263663" watchObservedRunningTime="2025-10-02 18:57:03.646095936 +0000 UTC m=+2180.615538808" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.039256 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr"] Oct 02 18:57:07 crc kubenswrapper[4832]: E1002 18:57:07.040189 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerName="extract-utilities" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.040210 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerName="extract-utilities" Oct 02 18:57:07 crc kubenswrapper[4832]: E1002 18:57:07.040224 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerName="registry-server" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.040230 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" 
containerName="registry-server" Oct 02 18:57:07 crc kubenswrapper[4832]: E1002 18:57:07.040292 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerName="extract-content" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.040300 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerName="extract-content" Oct 02 18:57:07 crc kubenswrapper[4832]: E1002 18:57:07.040314 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691f5920-3afd-4cf0-8ccb-61d2bbff10c2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.040321 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="691f5920-3afd-4cf0-8ccb-61d2bbff10c2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.040546 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b618281a-5e3b-4b9b-a0c3-14f81152bccd" containerName="registry-server" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.040573 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="691f5920-3afd-4cf0-8ccb-61d2bbff10c2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.041478 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.043786 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.044445 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.044988 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.045093 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.046309 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr"] Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.176458 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.176999 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.177057 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfw5n\" (UniqueName: 
\"kubernetes.io/projected/8b8c6e59-47c8-4051-a398-3f3d6739d15d-kube-api-access-sfw5n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.279857 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.279968 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.280000 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfw5n\" (UniqueName: \"kubernetes.io/projected/8b8c6e59-47c8-4051-a398-3f3d6739d15d-kube-api-access-sfw5n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.287379 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.287588 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.305705 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfw5n\" (UniqueName: \"kubernetes.io/projected/8b8c6e59-47c8-4051-a398-3f3d6739d15d-kube-api-access-sfw5n\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-xtftr\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:07 crc kubenswrapper[4832]: I1002 18:57:07.387127 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:57:08 crc kubenswrapper[4832]: I1002 18:57:08.400253 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr"] Oct 02 18:57:08 crc kubenswrapper[4832]: I1002 18:57:08.679725 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" event={"ID":"8b8c6e59-47c8-4051-a398-3f3d6739d15d","Type":"ContainerStarted","Data":"4a1a7cd1ab29898a8b8f7ef757cae8ac401f737a89c1f71f28934c82f33f68c0"} Oct 02 18:57:09 crc kubenswrapper[4832]: I1002 18:57:09.433541 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:57:09 crc kubenswrapper[4832]: I1002 18:57:09.434051 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:57:09 crc kubenswrapper[4832]: I1002 18:57:09.691565 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" event={"ID":"8b8c6e59-47c8-4051-a398-3f3d6739d15d","Type":"ContainerStarted","Data":"a177d6793cc7ba1b5b2912675750505532ef2ee9f50a53ab01af33d7fa6bd43f"} Oct 02 18:57:09 crc kubenswrapper[4832]: I1002 18:57:09.715811 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" podStartSLOduration=2.222531619 podStartE2EDuration="2.715789125s" podCreationTimestamp="2025-10-02 18:57:07 +0000 UTC" firstStartedPulling="2025-10-02 18:57:08.40595292 +0000 UTC m=+2185.375395792" lastFinishedPulling="2025-10-02 18:57:08.899210396 +0000 UTC m=+2185.868653298" observedRunningTime="2025-10-02 18:57:09.710040907 +0000 UTC m=+2186.679483789" watchObservedRunningTime="2025-10-02 18:57:09.715789125 +0000 UTC m=+2186.685232007" Oct 02 18:57:10 crc kubenswrapper[4832]: I1002 18:57:10.511136 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wchsb" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="registry-server" probeResult="failure" output=< Oct 02 18:57:10 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 18:57:10 crc kubenswrapper[4832]: > Oct 02 18:57:20 crc kubenswrapper[4832]: I1002 18:57:20.502290 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wchsb" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="registry-server" probeResult="failure" output=< Oct 02 18:57:20 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 18:57:20 crc kubenswrapper[4832]: > Oct 02 18:57:26 crc kubenswrapper[4832]: I1002 18:57:26.875082 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:57:26 crc kubenswrapper[4832]: I1002 18:57:26.876466 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Oct 02 18:57:26 crc kubenswrapper[4832]: I1002 18:57:26.876537 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 18:57:26 crc kubenswrapper[4832]: I1002 18:57:26.877790 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bad383d75da31b854d1e8d51851deee9c385d51d3a1bd396750d2fce236862ee"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 18:57:26 crc kubenswrapper[4832]: I1002 18:57:26.877891 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://bad383d75da31b854d1e8d51851deee9c385d51d3a1bd396750d2fce236862ee" gracePeriod=600 Oct 02 18:57:27 crc kubenswrapper[4832]: I1002 18:57:27.918416 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="bad383d75da31b854d1e8d51851deee9c385d51d3a1bd396750d2fce236862ee" exitCode=0 Oct 02 18:57:27 crc kubenswrapper[4832]: I1002 18:57:27.918484 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"bad383d75da31b854d1e8d51851deee9c385d51d3a1bd396750d2fce236862ee"} Oct 02 18:57:27 crc kubenswrapper[4832]: I1002 18:57:27.918917 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67"} Oct 02 18:57:27 crc kubenswrapper[4832]: I1002 18:57:27.918957 4832 scope.go:117] "RemoveContainer" containerID="eded9c3e3aeb0cea5df4506bb2142b87a9f14358321cef58967a4db84d8e3b9c" Oct 02 18:57:29 crc kubenswrapper[4832]: I1002 18:57:29.563739 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:57:29 crc kubenswrapper[4832]: I1002 18:57:29.634090 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:57:29 crc kubenswrapper[4832]: I1002 18:57:29.806437 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wchsb"] Oct 02 18:57:30 crc kubenswrapper[4832]: I1002 18:57:30.954664 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wchsb" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="registry-server" containerID="cri-o://957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348" gracePeriod=2 Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.665895 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.735477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnfd4\" (UniqueName: \"kubernetes.io/projected/a0899c70-4625-43e3-985d-7c80da1c8f6c-kube-api-access-gnfd4\") pod \"a0899c70-4625-43e3-985d-7c80da1c8f6c\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.736053 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-catalog-content\") pod \"a0899c70-4625-43e3-985d-7c80da1c8f6c\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.736111 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-utilities\") pod \"a0899c70-4625-43e3-985d-7c80da1c8f6c\" (UID: \"a0899c70-4625-43e3-985d-7c80da1c8f6c\") " Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.736813 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-utilities" (OuterVolumeSpecName: "utilities") pod "a0899c70-4625-43e3-985d-7c80da1c8f6c" (UID: "a0899c70-4625-43e3-985d-7c80da1c8f6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.741947 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0899c70-4625-43e3-985d-7c80da1c8f6c-kube-api-access-gnfd4" (OuterVolumeSpecName: "kube-api-access-gnfd4") pod "a0899c70-4625-43e3-985d-7c80da1c8f6c" (UID: "a0899c70-4625-43e3-985d-7c80da1c8f6c"). InnerVolumeSpecName "kube-api-access-gnfd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.834710 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0899c70-4625-43e3-985d-7c80da1c8f6c" (UID: "a0899c70-4625-43e3-985d-7c80da1c8f6c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.838666 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.838703 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0899c70-4625-43e3-985d-7c80da1c8f6c-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.838716 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnfd4\" (UniqueName: \"kubernetes.io/projected/a0899c70-4625-43e3-985d-7c80da1c8f6c-kube-api-access-gnfd4\") on node \"crc\" DevicePath \"\"" Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.979005 4832 generic.go:334] "Generic (PLEG): container finished" podID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerID="957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348" exitCode=0 Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.979056 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wchsb" event={"ID":"a0899c70-4625-43e3-985d-7c80da1c8f6c","Type":"ContainerDied","Data":"957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348"} Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.979084 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wchsb" event={"ID":"a0899c70-4625-43e3-985d-7c80da1c8f6c","Type":"ContainerDied","Data":"684a907c2ef663d8edbc820a9cd3abefedfb2e4b41c4522bb03b65a8b4183aff"} Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.979106 4832 scope.go:117] "RemoveContainer" containerID="957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348" Oct 02 18:57:31 crc kubenswrapper[4832]: I1002 18:57:31.979256 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wchsb" Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.017362 4832 scope.go:117] "RemoveContainer" containerID="d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac" Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.033602 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wchsb"] Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.045570 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wchsb"] Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.051908 4832 scope.go:117] "RemoveContainer" containerID="6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9" Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.068989 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-lsb6q"] Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.078554 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-lsb6q"] Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.117991 4832 scope.go:117] "RemoveContainer" containerID="957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348" Oct 02 18:57:32 crc kubenswrapper[4832]: E1002 18:57:32.118777 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348\": container with ID starting with 957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348 not found: ID does not exist" containerID="957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348" Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.118820 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348"} err="failed to get container status \"957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348\": rpc error: code = NotFound desc = could not find container \"957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348\": container with ID starting with 957c8caad340bb480d22092e567c2573fb488752d244121079dbf42374b81348 not found: ID does not exist" Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.118842 4832 scope.go:117] "RemoveContainer" containerID="d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac" Oct 02 18:57:32 crc kubenswrapper[4832]: E1002 18:57:32.119223 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac\": container with ID starting with d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac not found: ID does not exist" containerID="d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac" Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.119255 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac"} err="failed to get container status \"d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac\": rpc error: code = NotFound desc = could not find container \"d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac\": container with ID starting with d3d2c3110470eff21da6af4cd9291dc6fe63e4178f245b1f11bc80a71e6277ac not found: ID does not 
exist" Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.119292 4832 scope.go:117] "RemoveContainer" containerID="6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9" Oct 02 18:57:32 crc kubenswrapper[4832]: E1002 18:57:32.119582 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9\": container with ID starting with 6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9 not found: ID does not exist" containerID="6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9" Oct 02 18:57:32 crc kubenswrapper[4832]: I1002 18:57:32.119667 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9"} err="failed to get container status \"6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9\": rpc error: code = NotFound desc = could not find container \"6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9\": container with ID starting with 6d2fc5c8fcc3f7e21609603307f9b59c8f41d613ab40aea0bcb745ba96eda0b9 not found: ID does not exist" Oct 02 18:57:33 crc kubenswrapper[4832]: I1002 18:57:33.246507 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36e2b7cf-ba0c-4217-93e4-503ce1e40755" path="/var/lib/kubelet/pods/36e2b7cf-ba0c-4217-93e4-503ce1e40755/volumes" Oct 02 18:57:33 crc kubenswrapper[4832]: I1002 18:57:33.247665 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" path="/var/lib/kubelet/pods/a0899c70-4625-43e3-985d-7c80da1c8f6c/volumes" Oct 02 18:58:01 crc kubenswrapper[4832]: I1002 18:58:01.337024 4832 generic.go:334] "Generic (PLEG): container finished" podID="8b8c6e59-47c8-4051-a398-3f3d6739d15d" containerID="a177d6793cc7ba1b5b2912675750505532ef2ee9f50a53ab01af33d7fa6bd43f" exitCode=0 Oct 02 18:58:01 crc kubenswrapper[4832]: I1002 18:58:01.337147 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" event={"ID":"8b8c6e59-47c8-4051-a398-3f3d6739d15d","Type":"ContainerDied","Data":"a177d6793cc7ba1b5b2912675750505532ef2ee9f50a53ab01af33d7fa6bd43f"} Oct 02 18:58:02 crc kubenswrapper[4832]: I1002 18:58:02.938926 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.108374 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-ssh-key\") pod \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.108473 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfw5n\" (UniqueName: \"kubernetes.io/projected/8b8c6e59-47c8-4051-a398-3f3d6739d15d-kube-api-access-sfw5n\") pod \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.108752 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-inventory\") pod \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\" (UID: \"8b8c6e59-47c8-4051-a398-3f3d6739d15d\") " Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.116067 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8c6e59-47c8-4051-a398-3f3d6739d15d-kube-api-access-sfw5n" (OuterVolumeSpecName: "kube-api-access-sfw5n") pod "8b8c6e59-47c8-4051-a398-3f3d6739d15d" (UID: "8b8c6e59-47c8-4051-a398-3f3d6739d15d"). InnerVolumeSpecName "kube-api-access-sfw5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.142194 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-inventory" (OuterVolumeSpecName: "inventory") pod "8b8c6e59-47c8-4051-a398-3f3d6739d15d" (UID: "8b8c6e59-47c8-4051-a398-3f3d6739d15d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.151880 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8b8c6e59-47c8-4051-a398-3f3d6739d15d" (UID: "8b8c6e59-47c8-4051-a398-3f3d6739d15d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.213449 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.213491 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfw5n\" (UniqueName: \"kubernetes.io/projected/8b8c6e59-47c8-4051-a398-3f3d6739d15d-kube-api-access-sfw5n\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.213508 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b8c6e59-47c8-4051-a398-3f3d6739d15d-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.358766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" event={"ID":"8b8c6e59-47c8-4051-a398-3f3d6739d15d","Type":"ContainerDied","Data":"4a1a7cd1ab29898a8b8f7ef757cae8ac401f737a89c1f71f28934c82f33f68c0"} Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.358807 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a1a7cd1ab29898a8b8f7ef757cae8ac401f737a89c1f71f28934c82f33f68c0" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.358858 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-xtftr" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.460392 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d4vj4"] Oct 02 18:58:03 crc kubenswrapper[4832]: E1002 18:58:03.461189 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="extract-content" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.461358 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="extract-content" Oct 02 18:58:03 crc kubenswrapper[4832]: E1002 18:58:03.461475 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="registry-server" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.461555 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="registry-server" Oct 02 18:58:03 crc kubenswrapper[4832]: E1002 18:58:03.461648 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8c6e59-47c8-4051-a398-3f3d6739d15d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.461738 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8c6e59-47c8-4051-a398-3f3d6739d15d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:03 crc kubenswrapper[4832]: E1002 18:58:03.461840 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="extract-utilities" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.461912 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="extract-utilities" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.462290 4832 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="a0899c70-4625-43e3-985d-7c80da1c8f6c" containerName="registry-server" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.462416 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8c6e59-47c8-4051-a398-3f3d6739d15d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.463747 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.468124 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.471157 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.471324 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.472587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d4vj4"] Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.474234 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.520040 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.520148 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.520174 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56hkk\" (UniqueName: \"kubernetes.io/projected/03a71b8f-0cad-40ab-8092-51c6e380b13d-kube-api-access-56hkk\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.622960 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.623450 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 
02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.623509 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56hkk\" (UniqueName: \"kubernetes.io/projected/03a71b8f-0cad-40ab-8092-51c6e380b13d-kube-api-access-56hkk\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.629216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.633662 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.641455 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56hkk\" (UniqueName: \"kubernetes.io/projected/03a71b8f-0cad-40ab-8092-51c6e380b13d-kube-api-access-56hkk\") pod \"ssh-known-hosts-edpm-deployment-d4vj4\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:03 crc kubenswrapper[4832]: I1002 18:58:03.790289 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:04 crc kubenswrapper[4832]: I1002 18:58:04.165456 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 18:58:04 crc kubenswrapper[4832]: I1002 18:58:04.169745 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d4vj4"] Oct 02 18:58:04 crc kubenswrapper[4832]: I1002 18:58:04.369594 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" event={"ID":"03a71b8f-0cad-40ab-8092-51c6e380b13d","Type":"ContainerStarted","Data":"cb18d8dac2df614ccb4ef480a4053a52e0c836d24cce8c7d2df6e176a2833796"} Oct 02 18:58:05 crc kubenswrapper[4832]: I1002 18:58:05.385513 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" event={"ID":"03a71b8f-0cad-40ab-8092-51c6e380b13d","Type":"ContainerStarted","Data":"28409dabef888528f67ff4a3071edb34a14cdf101aea94652fc4fb2e0ac82a5f"} Oct 02 18:58:05 crc kubenswrapper[4832]: I1002 18:58:05.399865 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" podStartSLOduration=1.654172421 podStartE2EDuration="2.399850696s" podCreationTimestamp="2025-10-02 18:58:03 +0000 UTC" firstStartedPulling="2025-10-02 18:58:04.165192697 +0000 UTC m=+2241.134635569" lastFinishedPulling="2025-10-02 18:58:04.910870972 +0000 UTC m=+2241.880313844" observedRunningTime="2025-10-02 18:58:05.398585346 +0000 UTC m=+2242.368028228" watchObservedRunningTime="2025-10-02 18:58:05.399850696 +0000 UTC m=+2242.369293568" Oct 02 18:58:12 crc kubenswrapper[4832]: E1002 18:58:12.488143 4832 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03a71b8f_0cad_40ab_8092_51c6e380b13d.slice/crio-28409dabef888528f67ff4a3071edb34a14cdf101aea94652fc4fb2e0ac82a5f.scope\": RecentStats: unable to find data in memory cache]" Oct 02 18:58:13 crc kubenswrapper[4832]: I1002 18:58:13.479166 4832 generic.go:334] "Generic (PLEG): container finished" podID="03a71b8f-0cad-40ab-8092-51c6e380b13d" containerID="28409dabef888528f67ff4a3071edb34a14cdf101aea94652fc4fb2e0ac82a5f" exitCode=0 Oct 02 18:58:13 crc kubenswrapper[4832]: I1002 18:58:13.479210 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" event={"ID":"03a71b8f-0cad-40ab-8092-51c6e380b13d","Type":"ContainerDied","Data":"28409dabef888528f67ff4a3071edb34a14cdf101aea94652fc4fb2e0ac82a5f"} Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.016602 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.113062 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56hkk\" (UniqueName: \"kubernetes.io/projected/03a71b8f-0cad-40ab-8092-51c6e380b13d-kube-api-access-56hkk\") pod \"03a71b8f-0cad-40ab-8092-51c6e380b13d\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.113503 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-inventory-0\") pod \"03a71b8f-0cad-40ab-8092-51c6e380b13d\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.113577 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-ssh-key-openstack-edpm-ipam\") pod \"03a71b8f-0cad-40ab-8092-51c6e380b13d\" (UID: \"03a71b8f-0cad-40ab-8092-51c6e380b13d\") " Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.123584 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a71b8f-0cad-40ab-8092-51c6e380b13d-kube-api-access-56hkk" (OuterVolumeSpecName: "kube-api-access-56hkk") pod "03a71b8f-0cad-40ab-8092-51c6e380b13d" (UID: "03a71b8f-0cad-40ab-8092-51c6e380b13d"). InnerVolumeSpecName "kube-api-access-56hkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.155586 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03a71b8f-0cad-40ab-8092-51c6e380b13d" (UID: "03a71b8f-0cad-40ab-8092-51c6e380b13d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.158585 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "03a71b8f-0cad-40ab-8092-51c6e380b13d" (UID: "03a71b8f-0cad-40ab-8092-51c6e380b13d"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.216023 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56hkk\" (UniqueName: \"kubernetes.io/projected/03a71b8f-0cad-40ab-8092-51c6e380b13d-kube-api-access-56hkk\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.216058 4832 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.216076 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03a71b8f-0cad-40ab-8092-51c6e380b13d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.502233 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" event={"ID":"03a71b8f-0cad-40ab-8092-51c6e380b13d","Type":"ContainerDied","Data":"cb18d8dac2df614ccb4ef480a4053a52e0c836d24cce8c7d2df6e176a2833796"} Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.502291 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb18d8dac2df614ccb4ef480a4053a52e0c836d24cce8c7d2df6e176a2833796" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.502342 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d4vj4" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.583971 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc"] Oct 02 18:58:15 crc kubenswrapper[4832]: E1002 18:58:15.584507 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a71b8f-0cad-40ab-8092-51c6e380b13d" containerName="ssh-known-hosts-edpm-deployment" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.584525 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a71b8f-0cad-40ab-8092-51c6e380b13d" containerName="ssh-known-hosts-edpm-deployment" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.584743 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a71b8f-0cad-40ab-8092-51c6e380b13d" containerName="ssh-known-hosts-edpm-deployment" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.587593 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.590694 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.594760 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.594966 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.595538 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.607762 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc"] Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.730403 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.730889 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.731102 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twngf\" (UniqueName: \"kubernetes.io/projected/d5518272-a1ba-495e-8634-43ce4c08d705-kube-api-access-twngf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.833861 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.833949 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twngf\" (UniqueName: \"kubernetes.io/projected/d5518272-a1ba-495e-8634-43ce4c08d705-kube-api-access-twngf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.834068 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.837714 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.837927 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.853966 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twngf\" (UniqueName: \"kubernetes.io/projected/d5518272-a1ba-495e-8634-43ce4c08d705-kube-api-access-twngf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-6jxlc\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:15 crc kubenswrapper[4832]: I1002 18:58:15.905762 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:16 crc kubenswrapper[4832]: I1002 18:58:16.053542 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-pjwt8"] Oct 02 18:58:16 crc kubenswrapper[4832]: I1002 18:58:16.069033 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-pjwt8"] Oct 02 18:58:16 crc kubenswrapper[4832]: I1002 18:58:16.472024 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc"] Oct 02 18:58:16 crc kubenswrapper[4832]: I1002 18:58:16.516666 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" event={"ID":"d5518272-a1ba-495e-8634-43ce4c08d705","Type":"ContainerStarted","Data":"ac899cf7913a1388222b9e5cdad78595de84fa19b5d13031f2269fd75cbf25fc"} Oct 02 18:58:17 crc kubenswrapper[4832]: I1002 18:58:17.244380 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6150c26f-2bc7-4e66-84ef-b7241196ee1f" path="/var/lib/kubelet/pods/6150c26f-2bc7-4e66-84ef-b7241196ee1f/volumes" Oct 02 18:58:17 crc kubenswrapper[4832]: I1002 18:58:17.526683 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" event={"ID":"d5518272-a1ba-495e-8634-43ce4c08d705","Type":"ContainerStarted","Data":"927a032bf731658eb89d9c4d31a88077584b496543825d2fe55a047bfeb644ec"} Oct 02 18:58:17 crc kubenswrapper[4832]: I1002 18:58:17.547757 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" podStartSLOduration=1.933192829 podStartE2EDuration="2.547731216s" podCreationTimestamp="2025-10-02 18:58:15 +0000 UTC" firstStartedPulling="2025-10-02 18:58:16.476656234 +0000 UTC m=+2253.446099116" lastFinishedPulling="2025-10-02 18:58:17.091194641 +0000 UTC m=+2254.060637503" observedRunningTime="2025-10-02 18:58:17.543843225 +0000 UTC m=+2254.513286107" watchObservedRunningTime="2025-10-02 18:58:17.547731216 +0000 UTC 
m=+2254.517174088" Oct 02 18:58:18 crc kubenswrapper[4832]: I1002 18:58:18.784866 4832 scope.go:117] "RemoveContainer" containerID="72df4d7bd201709fde9f768fa700f6f8c64587c226c3c8c5fce6c3ffbb767de2" Oct 02 18:58:18 crc kubenswrapper[4832]: I1002 18:58:18.924490 4832 scope.go:117] "RemoveContainer" containerID="67ffdcd0a2bbf716a6e5e85c8f598529c70b3988728dba06b2606d6b51005967" Oct 02 18:58:26 crc kubenswrapper[4832]: I1002 18:58:26.643168 4832 generic.go:334] "Generic (PLEG): container finished" podID="d5518272-a1ba-495e-8634-43ce4c08d705" containerID="927a032bf731658eb89d9c4d31a88077584b496543825d2fe55a047bfeb644ec" exitCode=0 Oct 02 18:58:26 crc kubenswrapper[4832]: I1002 18:58:26.643319 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" event={"ID":"d5518272-a1ba-495e-8634-43ce4c08d705","Type":"ContainerDied","Data":"927a032bf731658eb89d9c4d31a88077584b496543825d2fe55a047bfeb644ec"} Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.134469 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.290481 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twngf\" (UniqueName: \"kubernetes.io/projected/d5518272-a1ba-495e-8634-43ce4c08d705-kube-api-access-twngf\") pod \"d5518272-a1ba-495e-8634-43ce4c08d705\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.290650 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-ssh-key\") pod \"d5518272-a1ba-495e-8634-43ce4c08d705\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.290711 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-inventory\") pod \"d5518272-a1ba-495e-8634-43ce4c08d705\" (UID: \"d5518272-a1ba-495e-8634-43ce4c08d705\") " Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.298814 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5518272-a1ba-495e-8634-43ce4c08d705-kube-api-access-twngf" (OuterVolumeSpecName: "kube-api-access-twngf") pod "d5518272-a1ba-495e-8634-43ce4c08d705" (UID: "d5518272-a1ba-495e-8634-43ce4c08d705"). InnerVolumeSpecName "kube-api-access-twngf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.331773 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d5518272-a1ba-495e-8634-43ce4c08d705" (UID: "d5518272-a1ba-495e-8634-43ce4c08d705"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.347189 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-inventory" (OuterVolumeSpecName: "inventory") pod "d5518272-a1ba-495e-8634-43ce4c08d705" (UID: "d5518272-a1ba-495e-8634-43ce4c08d705"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.394131 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twngf\" (UniqueName: \"kubernetes.io/projected/d5518272-a1ba-495e-8634-43ce4c08d705-kube-api-access-twngf\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.394287 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.394320 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d5518272-a1ba-495e-8634-43ce4c08d705-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.671977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" event={"ID":"d5518272-a1ba-495e-8634-43ce4c08d705","Type":"ContainerDied","Data":"ac899cf7913a1388222b9e5cdad78595de84fa19b5d13031f2269fd75cbf25fc"} Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.672040 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac899cf7913a1388222b9e5cdad78595de84fa19b5d13031f2269fd75cbf25fc" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.672102 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-6jxlc" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.753541 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh"] Oct 02 18:58:28 crc kubenswrapper[4832]: E1002 18:58:28.754184 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5518272-a1ba-495e-8634-43ce4c08d705" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.754208 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5518272-a1ba-495e-8634-43ce4c08d705" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.754571 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5518272-a1ba-495e-8634-43ce4c08d705" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.755660 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.758615 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.758816 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.758979 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.766888 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.768830 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh"] Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.907801 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.908652 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hpk4\" (UniqueName: \"kubernetes.io/projected/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-kube-api-access-2hpk4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:28 crc kubenswrapper[4832]: I1002 18:58:28.908844 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:29 crc kubenswrapper[4832]: I1002 18:58:29.011428 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:29 crc kubenswrapper[4832]: I1002 18:58:29.011639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hpk4\" (UniqueName: \"kubernetes.io/projected/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-kube-api-access-2hpk4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:29 crc kubenswrapper[4832]: I1002 18:58:29.011692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: 
\"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:29 crc kubenswrapper[4832]: I1002 18:58:29.015807 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:29 crc kubenswrapper[4832]: I1002 18:58:29.015836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:29 crc kubenswrapper[4832]: I1002 18:58:29.033778 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hpk4\" (UniqueName: \"kubernetes.io/projected/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-kube-api-access-2hpk4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:29 crc kubenswrapper[4832]: I1002 18:58:29.099955 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:29 crc kubenswrapper[4832]: I1002 18:58:29.723580 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh"] Oct 02 18:58:30 crc kubenswrapper[4832]: I1002 18:58:30.705656 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" event={"ID":"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d","Type":"ContainerStarted","Data":"33a1e3d1612a17d6e3ed66fb3be97e5c2da035c4376d7d83158075de65d173da"} Oct 02 18:58:31 crc kubenswrapper[4832]: I1002 18:58:31.719832 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" event={"ID":"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d","Type":"ContainerStarted","Data":"2e49ed0d2ef3d8af92749e132a186afee2f58471d635ac2017941ce6b42b6ecd"} Oct 02 18:58:31 crc kubenswrapper[4832]: I1002 18:58:31.752795 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" podStartSLOduration=2.9708457360000002 podStartE2EDuration="3.752769957s" podCreationTimestamp="2025-10-02 18:58:28 +0000 UTC" firstStartedPulling="2025-10-02 18:58:29.75845618 +0000 UTC m=+2266.727899092" lastFinishedPulling="2025-10-02 18:58:30.540380431 +0000 UTC m=+2267.509823313" observedRunningTime="2025-10-02 18:58:31.74420359 +0000 UTC m=+2268.713646502" watchObservedRunningTime="2025-10-02 18:58:31.752769957 +0000 UTC m=+2268.722212859" Oct 02 18:58:41 crc kubenswrapper[4832]: I1002 18:58:41.845541 4832 generic.go:334] "Generic (PLEG): container finished" podID="c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d" containerID="2e49ed0d2ef3d8af92749e132a186afee2f58471d635ac2017941ce6b42b6ecd" exitCode=0 Oct 02 18:58:41 crc kubenswrapper[4832]: I1002 18:58:41.845632 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" 
event={"ID":"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d","Type":"ContainerDied","Data":"2e49ed0d2ef3d8af92749e132a186afee2f58471d635ac2017941ce6b42b6ecd"} Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.502242 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.616502 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-inventory\") pod \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.616735 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-ssh-key\") pod \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.616827 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hpk4\" (UniqueName: \"kubernetes.io/projected/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-kube-api-access-2hpk4\") pod \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\" (UID: \"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d\") " Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.622405 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-kube-api-access-2hpk4" (OuterVolumeSpecName: "kube-api-access-2hpk4") pod "c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d" (UID: "c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d"). InnerVolumeSpecName "kube-api-access-2hpk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.648950 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d" (UID: "c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.664396 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-inventory" (OuterVolumeSpecName: "inventory") pod "c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d" (UID: "c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.721962 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.722015 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.722036 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hpk4\" (UniqueName: \"kubernetes.io/projected/c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d-kube-api-access-2hpk4\") on node \"crc\" DevicePath \"\"" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.878013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" event={"ID":"c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d","Type":"ContainerDied","Data":"33a1e3d1612a17d6e3ed66fb3be97e5c2da035c4376d7d83158075de65d173da"} Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.878059 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a1e3d1612a17d6e3ed66fb3be97e5c2da035c4376d7d83158075de65d173da" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.878116 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.975714 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t"] Oct 02 18:58:43 crc kubenswrapper[4832]: E1002 18:58:43.976283 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.976306 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.976629 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.977688 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.985730 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.985768 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.986294 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.986317 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.985878 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.986182 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.991647 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.992170 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 02 18:58:43 crc kubenswrapper[4832]: I1002 18:58:43.992524 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.023486 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t"] Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131388 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131419 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 
18:58:44.131441 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131471 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131548 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131603 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131790 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.131957 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2rf4\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-kube-api-access-v2rf4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc 
kubenswrapper[4832]: I1002 18:58:44.132121 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.132193 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.132397 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.132477 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.132555 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.132684 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235028 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235087 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235124 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235149 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235246 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235320 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235346 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235375 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235430 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2rf4\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-kube-api-access-v2rf4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235476 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235502 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235585 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235618 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235654 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.235714 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 
18:58:44.240957 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.241715 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.242356 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.242793 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.243886 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.244007 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.244329 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.244590 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.244925 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.245296 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.247404 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.247767 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.248904 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.249344 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.254092 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.261201 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2rf4\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-kube-api-access-v2rf4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.299010 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:58:44 crc kubenswrapper[4832]: I1002 18:58:44.917649 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t"] Oct 02 18:58:44 crc kubenswrapper[4832]: W1002 18:58:44.919683 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61f0ae54_7250_4cc8_9b15_10d1be6c5d31.slice/crio-067f35e6e3fbedff89b335133052a15c9acaff2191f3e420e411c5a2bdde1400 WatchSource:0}: Error finding container 067f35e6e3fbedff89b335133052a15c9acaff2191f3e420e411c5a2bdde1400: Status 404 returned error can't find the container with id 067f35e6e3fbedff89b335133052a15c9acaff2191f3e420e411c5a2bdde1400 Oct 02 18:58:45 crc kubenswrapper[4832]: I1002 18:58:45.442885 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:58:45 crc kubenswrapper[4832]: I1002 18:58:45.904335 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" event={"ID":"61f0ae54-7250-4cc8-9b15-10d1be6c5d31","Type":"ContainerStarted","Data":"fd60cae0769152574c0660c0ba9037d9c7d488135e9c664f4d7c400b36c65827"} Oct 02 18:58:45 crc kubenswrapper[4832]: I1002 18:58:45.904419 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" event={"ID":"61f0ae54-7250-4cc8-9b15-10d1be6c5d31","Type":"ContainerStarted","Data":"067f35e6e3fbedff89b335133052a15c9acaff2191f3e420e411c5a2bdde1400"} Oct 02 18:58:46 crc kubenswrapper[4832]: I1002 18:58:46.945443 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" podStartSLOduration=3.429745851 podStartE2EDuration="3.945420815s" podCreationTimestamp="2025-10-02 18:58:43 +0000 UTC" firstStartedPulling="2025-10-02 18:58:44.924050337 +0000 UTC m=+2281.893493209" lastFinishedPulling="2025-10-02 18:58:45.439725301 +0000 UTC m=+2282.409168173" observedRunningTime="2025-10-02 18:58:46.944853798 +0000 UTC m=+2283.914296690" watchObservedRunningTime="2025-10-02 18:58:46.945420815 +0000 UTC m=+2283.914863697" Oct 02 18:59:37 crc kubenswrapper[4832]: I1002 18:59:37.577133 4832 generic.go:334] "Generic (PLEG): container finished" podID="61f0ae54-7250-4cc8-9b15-10d1be6c5d31" containerID="fd60cae0769152574c0660c0ba9037d9c7d488135e9c664f4d7c400b36c65827" exitCode=0 Oct 02 18:59:37 crc kubenswrapper[4832]: I1002 18:59:37.577340 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" 
event={"ID":"61f0ae54-7250-4cc8-9b15-10d1be6c5d31","Type":"ContainerDied","Data":"fd60cae0769152574c0660c0ba9037d9c7d488135e9c664f4d7c400b36c65827"} Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.138165 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.270866 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.270928 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-ovn-default-certs-0\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.270960 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-power-monitoring-combined-ca-bundle\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.270993 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-bootstrap-combined-ca-bundle\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271046 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-combined-ca-bundle\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271092 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-nova-combined-ca-bundle\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271171 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ovn-combined-ca-bundle\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271294 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 
18:59:39.271412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271632 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2rf4\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-kube-api-access-v2rf4\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271671 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-libvirt-combined-ca-bundle\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271712 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-inventory\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271762 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-neutron-metadata-combined-ca-bundle\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ssh-key\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271918 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.271997 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-repo-setup-combined-ca-bundle\") pod \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\" (UID: \"61f0ae54-7250-4cc8-9b15-10d1be6c5d31\") " Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.278099 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.278994 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.280297 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.281209 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.281374 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.281424 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.282237 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.281460 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.283475 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.283696 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-kube-api-access-v2rf4" (OuterVolumeSpecName: "kube-api-access-v2rf4") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "kube-api-access-v2rf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.284127 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.284735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.286293 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.286865 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.314517 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-inventory" (OuterVolumeSpecName: "inventory") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.317422 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "61f0ae54-7250-4cc8-9b15-10d1be6c5d31" (UID: "61f0ae54-7250-4cc8-9b15-10d1be6c5d31"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374602 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374636 4832 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374646 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374655 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374664 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374672 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374681 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374692 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2rf4\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-kube-api-access-v2rf4\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374700 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374708 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374716 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374724 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374732 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374742 4832 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374751 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.374760 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/61f0ae54-7250-4cc8-9b15-10d1be6c5d31-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.604134 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" event={"ID":"61f0ae54-7250-4cc8-9b15-10d1be6c5d31","Type":"ContainerDied","Data":"067f35e6e3fbedff89b335133052a15c9acaff2191f3e420e411c5a2bdde1400"} Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.604191 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.604197 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="067f35e6e3fbedff89b335133052a15c9acaff2191f3e420e411c5a2bdde1400" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.729816 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh"] Oct 02 18:59:39 crc kubenswrapper[4832]: E1002 18:59:39.730341 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f0ae54-7250-4cc8-9b15-10d1be6c5d31" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.730358 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f0ae54-7250-4cc8-9b15-10d1be6c5d31" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.730613 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f0ae54-7250-4cc8-9b15-10d1be6c5d31" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.731470 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.735048 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.735048 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.735052 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.735969 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.736250 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.744202 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh"] Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.886750 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.886812 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.886923 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqz5\" (UniqueName: 
\"kubernetes.io/projected/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-kube-api-access-rcqz5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.886942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.886973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.988873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.989122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.989173 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.989267 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqz5\" (UniqueName: \"kubernetes.io/projected/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-kube-api-access-rcqz5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.989310 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.990514 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.992763 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:39 crc kubenswrapper[4832]: I1002 18:59:39.993337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:40 crc kubenswrapper[4832]: I1002 18:59:40.003050 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:40 crc kubenswrapper[4832]: I1002 18:59:40.005834 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqz5\" (UniqueName: \"kubernetes.io/projected/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-kube-api-access-rcqz5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jwffh\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:40 crc kubenswrapper[4832]: I1002 18:59:40.098630 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 18:59:40 crc kubenswrapper[4832]: I1002 18:59:40.632508 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh"] Oct 02 18:59:41 crc kubenswrapper[4832]: I1002 18:59:41.630776 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" event={"ID":"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d","Type":"ContainerStarted","Data":"6b74a42d62b94fa99910ec969f47b6daf5e5e031ef6f5700e3eb41c0576ef4c8"} Oct 02 18:59:41 crc kubenswrapper[4832]: I1002 18:59:41.631470 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" event={"ID":"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d","Type":"ContainerStarted","Data":"7db6209810d3c858c3ff96010e1ea7816960e11a1769291f07cacfe47543ab88"} Oct 02 18:59:41 crc kubenswrapper[4832]: I1002 18:59:41.699523 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" podStartSLOduration=2.182843698 podStartE2EDuration="2.699455874s" podCreationTimestamp="2025-10-02 18:59:39 +0000 UTC" firstStartedPulling="2025-10-02 18:59:40.648131528 +0000 UTC m=+2337.617574400" lastFinishedPulling="2025-10-02 18:59:41.164743704 +0000 UTC m=+2338.134186576" observedRunningTime="2025-10-02 18:59:41.674041224 +0000 UTC m=+2338.643484126" watchObservedRunningTime="2025-10-02 18:59:41.699455874 +0000 UTC m=+2338.668898766" Oct 02 18:59:56 crc kubenswrapper[4832]: I1002 18:59:56.876196 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 18:59:56 crc kubenswrapper[4832]: I1002 18:59:56.876768 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.160667 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b"] Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.164965 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.169587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b"] Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.181541 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.181635 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.237746 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdjl\" (UniqueName: \"kubernetes.io/projected/f89ed500-6e46-4b64-bae2-601ca08b5174-kube-api-access-rtdjl\") pod \"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.238286 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f89ed500-6e46-4b64-bae2-601ca08b5174-config-volume\") pod \"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.238562 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f89ed500-6e46-4b64-bae2-601ca08b5174-secret-volume\") pod \"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.340034 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f89ed500-6e46-4b64-bae2-601ca08b5174-config-volume\") pod \"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.340250 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f89ed500-6e46-4b64-bae2-601ca08b5174-secret-volume\") pod \"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.340379 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdjl\" (UniqueName: \"kubernetes.io/projected/f89ed500-6e46-4b64-bae2-601ca08b5174-kube-api-access-rtdjl\") pod \"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.341035 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f89ed500-6e46-4b64-bae2-601ca08b5174-config-volume\") pod 
\"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.349027 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f89ed500-6e46-4b64-bae2-601ca08b5174-secret-volume\") pod \"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.366967 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdjl\" (UniqueName: \"kubernetes.io/projected/f89ed500-6e46-4b64-bae2-601ca08b5174-kube-api-access-rtdjl\") pod \"collect-profiles-29323860-2t96b\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.501324 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:00 crc kubenswrapper[4832]: I1002 19:00:00.987007 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b"] Oct 02 19:00:01 crc kubenswrapper[4832]: I1002 19:00:01.891470 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" event={"ID":"f89ed500-6e46-4b64-bae2-601ca08b5174","Type":"ContainerStarted","Data":"9133a8d46d4ab840ebca7eb74152768347922dad93e8ea6c8263be938a272eff"} Oct 02 19:00:01 crc kubenswrapper[4832]: I1002 19:00:01.892252 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" event={"ID":"f89ed500-6e46-4b64-bae2-601ca08b5174","Type":"ContainerStarted","Data":"f9d207581a970f506643b9de84ef6325bb7b60b86a9286d37aa0e454fb6c9de9"} Oct 02 19:00:01 crc kubenswrapper[4832]: I1002 19:00:01.931979 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" podStartSLOduration=1.9319504090000001 podStartE2EDuration="1.931950409s" podCreationTimestamp="2025-10-02 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 19:00:01.914065894 +0000 UTC m=+2358.883508816" watchObservedRunningTime="2025-10-02 19:00:01.931950409 +0000 UTC m=+2358.901393321" Oct 02 19:00:02 crc kubenswrapper[4832]: I1002 19:00:02.921014 4832 generic.go:334] "Generic (PLEG): container finished" podID="f89ed500-6e46-4b64-bae2-601ca08b5174" containerID="9133a8d46d4ab840ebca7eb74152768347922dad93e8ea6c8263be938a272eff" exitCode=0 Oct 02 19:00:02 crc kubenswrapper[4832]: I1002 19:00:02.921250 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" event={"ID":"f89ed500-6e46-4b64-bae2-601ca08b5174","Type":"ContainerDied","Data":"9133a8d46d4ab840ebca7eb74152768347922dad93e8ea6c8263be938a272eff"} Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.437597 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.546493 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f89ed500-6e46-4b64-bae2-601ca08b5174-config-volume\") pod \"f89ed500-6e46-4b64-bae2-601ca08b5174\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.546865 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f89ed500-6e46-4b64-bae2-601ca08b5174-secret-volume\") pod \"f89ed500-6e46-4b64-bae2-601ca08b5174\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.546982 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdjl\" (UniqueName: \"kubernetes.io/projected/f89ed500-6e46-4b64-bae2-601ca08b5174-kube-api-access-rtdjl\") pod \"f89ed500-6e46-4b64-bae2-601ca08b5174\" (UID: \"f89ed500-6e46-4b64-bae2-601ca08b5174\") " Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.547409 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f89ed500-6e46-4b64-bae2-601ca08b5174-config-volume" (OuterVolumeSpecName: "config-volume") pod "f89ed500-6e46-4b64-bae2-601ca08b5174" (UID: "f89ed500-6e46-4b64-bae2-601ca08b5174"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.548406 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f89ed500-6e46-4b64-bae2-601ca08b5174-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.555516 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89ed500-6e46-4b64-bae2-601ca08b5174-kube-api-access-rtdjl" (OuterVolumeSpecName: "kube-api-access-rtdjl") pod "f89ed500-6e46-4b64-bae2-601ca08b5174" (UID: "f89ed500-6e46-4b64-bae2-601ca08b5174"). InnerVolumeSpecName "kube-api-access-rtdjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.560588 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89ed500-6e46-4b64-bae2-601ca08b5174-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f89ed500-6e46-4b64-bae2-601ca08b5174" (UID: "f89ed500-6e46-4b64-bae2-601ca08b5174"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.650680 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f89ed500-6e46-4b64-bae2-601ca08b5174-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.650713 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtdjl\" (UniqueName: \"kubernetes.io/projected/f89ed500-6e46-4b64-bae2-601ca08b5174-kube-api-access-rtdjl\") on node \"crc\" DevicePath \"\"" Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.949662 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" event={"ID":"f89ed500-6e46-4b64-bae2-601ca08b5174","Type":"ContainerDied","Data":"f9d207581a970f506643b9de84ef6325bb7b60b86a9286d37aa0e454fb6c9de9"} Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.949960 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9d207581a970f506643b9de84ef6325bb7b60b86a9286d37aa0e454fb6c9de9" Oct 02 19:00:04 crc kubenswrapper[4832]: I1002 19:00:04.949909 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b" Oct 02 19:00:05 crc kubenswrapper[4832]: I1002 19:00:05.006626 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"] Oct 02 19:00:05 crc kubenswrapper[4832]: I1002 19:00:05.017901 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323815-xbthn"] Oct 02 19:00:05 crc kubenswrapper[4832]: I1002 19:00:05.258865 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e291aef6-bbde-41a5-9981-96b992547e03" path="/var/lib/kubelet/pods/e291aef6-bbde-41a5-9981-96b992547e03/volumes" Oct 02 19:00:19 crc kubenswrapper[4832]: I1002 19:00:19.082426 4832 scope.go:117] "RemoveContainer" containerID="64d284757ed2c12f01897739b3e186e8a14c4692a20fe726fd8d3c98fe203c77" Oct 02 19:00:26 crc kubenswrapper[4832]: I1002 19:00:26.875986 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:00:26 crc kubenswrapper[4832]: I1002 19:00:26.876636 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:00:48 crc kubenswrapper[4832]: I1002 19:00:48.527535 4832 generic.go:334] "Generic (PLEG): container finished" podID="6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" containerID="6b74a42d62b94fa99910ec969f47b6daf5e5e031ef6f5700e3eb41c0576ef4c8" exitCode=0 Oct 02 19:00:48 crc kubenswrapper[4832]: I1002 19:00:48.527550 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" event={"ID":"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d","Type":"ContainerDied","Data":"6b74a42d62b94fa99910ec969f47b6daf5e5e031ef6f5700e3eb41c0576ef4c8"} Oct 02 19:00:50 crc 
kubenswrapper[4832]: I1002 19:00:50.058070 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.194142 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-inventory\") pod \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.194243 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ssh-key\") pod \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.194471 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovncontroller-config-0\") pod \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.194714 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcqz5\" (UniqueName: \"kubernetes.io/projected/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-kube-api-access-rcqz5\") pod \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.194750 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovn-combined-ca-bundle\") pod \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\" (UID: \"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d\") " Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.203126 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" (UID: "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.204615 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-kube-api-access-rcqz5" (OuterVolumeSpecName: "kube-api-access-rcqz5") pod "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" (UID: "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d"). InnerVolumeSpecName "kube-api-access-rcqz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.245104 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-inventory" (OuterVolumeSpecName: "inventory") pod "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" (UID: "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.252193 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" (UID: "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.258553 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" (UID: "6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.298378 4832 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.298449 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcqz5\" (UniqueName: \"kubernetes.io/projected/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-kube-api-access-rcqz5\") on node \"crc\" DevicePath \"\"" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.298486 4832 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.298506 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.298549 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.557721 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" event={"ID":"6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d","Type":"ContainerDied","Data":"7db6209810d3c858c3ff96010e1ea7816960e11a1769291f07cacfe47543ab88"} Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.557797 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7db6209810d3c858c3ff96010e1ea7816960e11a1769291f07cacfe47543ab88" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.557822 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jwffh" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.703993 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw"] Oct 02 19:00:50 crc kubenswrapper[4832]: E1002 19:00:50.705031 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89ed500-6e46-4b64-bae2-601ca08b5174" containerName="collect-profiles" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.705063 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89ed500-6e46-4b64-bae2-601ca08b5174" containerName="collect-profiles" Oct 02 19:00:50 crc kubenswrapper[4832]: E1002 19:00:50.705149 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.705168 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.705563 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89ed500-6e46-4b64-bae2-601ca08b5174" containerName="collect-profiles" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.705604 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.707210 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.715864 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.717721 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.718095 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.718236 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.718362 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.718559 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.720587 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw"] Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.811247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 
19:00:50.811441 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.811624 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.811720 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.811793 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97kk2\" (UniqueName: \"kubernetes.io/projected/09555253-1acb-4af2-a44c-a2a5612465ff-kube-api-access-97kk2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.811872 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.913753 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.913836 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.913900 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-97kk2\" (UniqueName: \"kubernetes.io/projected/09555253-1acb-4af2-a44c-a2a5612465ff-kube-api-access-97kk2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.913971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.914078 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.914248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.920727 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.920751 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.920777 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.921839 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 
19:00:50.923878 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:50 crc kubenswrapper[4832]: I1002 19:00:50.931692 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97kk2\" (UniqueName: \"kubernetes.io/projected/09555253-1acb-4af2-a44c-a2a5612465ff-kube-api-access-97kk2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:51 crc kubenswrapper[4832]: I1002 19:00:51.031247 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:00:51 crc kubenswrapper[4832]: I1002 19:00:51.712050 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw"] Oct 02 19:00:52 crc kubenswrapper[4832]: I1002 19:00:52.582125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" event={"ID":"09555253-1acb-4af2-a44c-a2a5612465ff","Type":"ContainerStarted","Data":"ec9426a7788352a56368b2428aa7ef3bce3802ed3d534c2f9cf5b75380940bca"} Oct 02 19:00:53 crc kubenswrapper[4832]: I1002 19:00:53.598391 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" event={"ID":"09555253-1acb-4af2-a44c-a2a5612465ff","Type":"ContainerStarted","Data":"f0f69da701f71b265bceb9926e1068cbccb585ca93864ae1f2f0d98b1a045099"} Oct 02 19:00:53 crc kubenswrapper[4832]: I1002 19:00:53.624740 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" podStartSLOduration=2.7149880509999997 podStartE2EDuration="3.624716433s" podCreationTimestamp="2025-10-02 19:00:50 +0000 UTC" firstStartedPulling="2025-10-02 19:00:51.704275077 +0000 UTC m=+2408.673717949" lastFinishedPulling="2025-10-02 19:00:52.614003419 +0000 UTC m=+2409.583446331" observedRunningTime="2025-10-02 19:00:53.615692632 +0000 UTC m=+2410.585135514" watchObservedRunningTime="2025-10-02 19:00:53.624716433 +0000 UTC m=+2410.594159345" Oct 02 19:00:56 crc kubenswrapper[4832]: I1002 19:00:56.876248 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:00:56 crc kubenswrapper[4832]: I1002 19:00:56.877327 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:00:56 crc kubenswrapper[4832]: I1002 19:00:56.877416 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 19:00:56 crc kubenswrapper[4832]: I1002 19:00:56.879024 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 19:00:56 crc kubenswrapper[4832]: I1002 19:00:56.879143 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" gracePeriod=600 Oct 02 19:00:57 crc kubenswrapper[4832]: E1002 19:00:57.024321 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:00:57 crc kubenswrapper[4832]: I1002 19:00:57.650235 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" exitCode=0 Oct 02 19:00:57 crc kubenswrapper[4832]: I1002 19:00:57.650296 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67"} Oct 02 19:00:57 crc kubenswrapper[4832]: I1002 19:00:57.650360 4832 scope.go:117] "RemoveContainer" containerID="bad383d75da31b854d1e8d51851deee9c385d51d3a1bd396750d2fce236862ee" Oct 02 19:00:57 crc kubenswrapper[4832]: I1002 19:00:57.655449 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:00:57 crc kubenswrapper[4832]: E1002 19:00:57.656608 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.156459 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29323861-lrrgj"] Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.160451 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.172184 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29323861-lrrgj"] Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.298795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-fernet-keys\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.299254 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-combined-ca-bundle\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.299444 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgbkx\" (UniqueName: \"kubernetes.io/projected/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-kube-api-access-rgbkx\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.299572 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-config-data\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.402374 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-fernet-keys\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.402675 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-combined-ca-bundle\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.402754 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgbkx\" (UniqueName: \"kubernetes.io/projected/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-kube-api-access-rgbkx\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.402788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-config-data\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.413628 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-fernet-keys\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.413836 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-combined-ca-bundle\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.413914 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-config-data\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.437784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgbkx\" (UniqueName: \"kubernetes.io/projected/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-kube-api-access-rgbkx\") pod \"keystone-cron-29323861-lrrgj\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:00 crc kubenswrapper[4832]: I1002 19:01:00.529594 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:01 crc kubenswrapper[4832]: I1002 19:01:01.042733 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29323861-lrrgj"] Oct 02 19:01:01 crc kubenswrapper[4832]: I1002 19:01:01.721079 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323861-lrrgj" event={"ID":"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d","Type":"ContainerStarted","Data":"0bc59267574b51ccca410f1b0e6638a7800c196ce947bd9e3351cf448125aab6"} Oct 02 19:01:01 crc kubenswrapper[4832]: I1002 19:01:01.721506 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323861-lrrgj" event={"ID":"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d","Type":"ContainerStarted","Data":"dc385a66aac86fcc326b01815c9230d23b267434fa0be86dc1e625672f438243"} Oct 02 19:01:01 crc kubenswrapper[4832]: I1002 19:01:01.757481 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29323861-lrrgj" podStartSLOduration=1.7574509219999999 podStartE2EDuration="1.757450922s" podCreationTimestamp="2025-10-02 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 19:01:01.739719811 +0000 UTC m=+2418.709162693" watchObservedRunningTime="2025-10-02 19:01:01.757450922 +0000 UTC m=+2418.726893834" Oct 02 19:01:05 crc kubenswrapper[4832]: I1002 19:01:05.771689 4832 generic.go:334] "Generic (PLEG): container finished" podID="d89bc766-c21f-4c7e-a092-3e1db2ed4c9d" containerID="0bc59267574b51ccca410f1b0e6638a7800c196ce947bd9e3351cf448125aab6" exitCode=0 Oct 02 19:01:05 crc kubenswrapper[4832]: I1002 19:01:05.771826 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323861-lrrgj" event={"ID":"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d","Type":"ContainerDied","Data":"0bc59267574b51ccca410f1b0e6638a7800c196ce947bd9e3351cf448125aab6"} Oct 02 19:01:07 crc 
kubenswrapper[4832]: I1002 19:01:07.216443 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.320340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-fernet-keys\") pod \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.320386 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgbkx\" (UniqueName: \"kubernetes.io/projected/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-kube-api-access-rgbkx\") pod \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.320420 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-combined-ca-bundle\") pod \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.320449 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-config-data\") pod \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\" (UID: \"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d\") " Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.327578 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d89bc766-c21f-4c7e-a092-3e1db2ed4c9d" (UID: "d89bc766-c21f-4c7e-a092-3e1db2ed4c9d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.330378 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-kube-api-access-rgbkx" (OuterVolumeSpecName: "kube-api-access-rgbkx") pod "d89bc766-c21f-4c7e-a092-3e1db2ed4c9d" (UID: "d89bc766-c21f-4c7e-a092-3e1db2ed4c9d"). InnerVolumeSpecName "kube-api-access-rgbkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.354521 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d89bc766-c21f-4c7e-a092-3e1db2ed4c9d" (UID: "d89bc766-c21f-4c7e-a092-3e1db2ed4c9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.410685 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-config-data" (OuterVolumeSpecName: "config-data") pod "d89bc766-c21f-4c7e-a092-3e1db2ed4c9d" (UID: "d89bc766-c21f-4c7e-a092-3e1db2ed4c9d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.423535 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.423562 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgbkx\" (UniqueName: \"kubernetes.io/projected/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-kube-api-access-rgbkx\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.423573 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.424879 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89bc766-c21f-4c7e-a092-3e1db2ed4c9d-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.811078 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323861-lrrgj" event={"ID":"d89bc766-c21f-4c7e-a092-3e1db2ed4c9d","Type":"ContainerDied","Data":"dc385a66aac86fcc326b01815c9230d23b267434fa0be86dc1e625672f438243"} Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.811136 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc385a66aac86fcc326b01815c9230d23b267434fa0be86dc1e625672f438243" Oct 02 19:01:07 crc kubenswrapper[4832]: I1002 19:01:07.811158 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323861-lrrgj" Oct 02 19:01:12 crc kubenswrapper[4832]: I1002 19:01:12.223736 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:01:12 crc kubenswrapper[4832]: E1002 19:01:12.224929 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:01:26 crc kubenswrapper[4832]: I1002 19:01:26.223508 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:01:26 crc kubenswrapper[4832]: E1002 19:01:26.224545 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:01:37 crc kubenswrapper[4832]: I1002 19:01:37.224958 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:01:37 crc kubenswrapper[4832]: E1002 19:01:37.225795 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:01:47 crc kubenswrapper[4832]: I1002 19:01:47.375192 4832 generic.go:334] "Generic (PLEG): container finished" podID="09555253-1acb-4af2-a44c-a2a5612465ff" containerID="f0f69da701f71b265bceb9926e1068cbccb585ca93864ae1f2f0d98b1a045099" exitCode=0 Oct 02 19:01:47 crc kubenswrapper[4832]: I1002 19:01:47.375304 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" event={"ID":"09555253-1acb-4af2-a44c-a2a5612465ff","Type":"ContainerDied","Data":"f0f69da701f71b265bceb9926e1068cbccb585ca93864ae1f2f0d98b1a045099"} Oct 02 19:01:48 crc kubenswrapper[4832]: I1002 19:01:48.968718 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.041160 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-ssh-key\") pod \"09555253-1acb-4af2-a44c-a2a5612465ff\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.041415 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-metadata-combined-ca-bundle\") pod \"09555253-1acb-4af2-a44c-a2a5612465ff\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.041527 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97kk2\" (UniqueName: \"kubernetes.io/projected/09555253-1acb-4af2-a44c-a2a5612465ff-kube-api-access-97kk2\") pod \"09555253-1acb-4af2-a44c-a2a5612465ff\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.041593 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-nova-metadata-neutron-config-0\") pod \"09555253-1acb-4af2-a44c-a2a5612465ff\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.041777 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-ovn-metadata-agent-neutron-config-0\") pod \"09555253-1acb-4af2-a44c-a2a5612465ff\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.041912 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-inventory\") pod \"09555253-1acb-4af2-a44c-a2a5612465ff\" (UID: \"09555253-1acb-4af2-a44c-a2a5612465ff\") " Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.047166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/09555253-1acb-4af2-a44c-a2a5612465ff-kube-api-access-97kk2" (OuterVolumeSpecName: "kube-api-access-97kk2") pod "09555253-1acb-4af2-a44c-a2a5612465ff" (UID: "09555253-1acb-4af2-a44c-a2a5612465ff"). InnerVolumeSpecName "kube-api-access-97kk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.051209 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "09555253-1acb-4af2-a44c-a2a5612465ff" (UID: "09555253-1acb-4af2-a44c-a2a5612465ff"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.078377 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "09555253-1acb-4af2-a44c-a2a5612465ff" (UID: "09555253-1acb-4af2-a44c-a2a5612465ff"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.078800 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-inventory" (OuterVolumeSpecName: "inventory") pod "09555253-1acb-4af2-a44c-a2a5612465ff" (UID: "09555253-1acb-4af2-a44c-a2a5612465ff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.086429 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "09555253-1acb-4af2-a44c-a2a5612465ff" (UID: "09555253-1acb-4af2-a44c-a2a5612465ff"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.114042 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "09555253-1acb-4af2-a44c-a2a5612465ff" (UID: "09555253-1acb-4af2-a44c-a2a5612465ff"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.145453 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.145501 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97kk2\" (UniqueName: \"kubernetes.io/projected/09555253-1acb-4af2-a44c-a2a5612465ff-kube-api-access-97kk2\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.145516 4832 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.145532 4832 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.145550 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.145562 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09555253-1acb-4af2-a44c-a2a5612465ff-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.405355 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" event={"ID":"09555253-1acb-4af2-a44c-a2a5612465ff","Type":"ContainerDied","Data":"ec9426a7788352a56368b2428aa7ef3bce3802ed3d534c2f9cf5b75380940bca"} Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.405405 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec9426a7788352a56368b2428aa7ef3bce3802ed3d534c2f9cf5b75380940bca" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.405436 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.494306 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8"] Oct 02 19:01:49 crc kubenswrapper[4832]: E1002 19:01:49.494778 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09555253-1acb-4af2-a44c-a2a5612465ff" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.494798 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="09555253-1acb-4af2-a44c-a2a5612465ff" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 02 19:01:49 crc kubenswrapper[4832]: E1002 19:01:49.494832 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89bc766-c21f-4c7e-a092-3e1db2ed4c9d" containerName="keystone-cron" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.494840 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89bc766-c21f-4c7e-a092-3e1db2ed4c9d" containerName="keystone-cron" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.495078 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89bc766-c21f-4c7e-a092-3e1db2ed4c9d" containerName="keystone-cron" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.495102 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="09555253-1acb-4af2-a44c-a2a5612465ff" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.495906 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.499198 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.499203 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.499289 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.499802 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.503366 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.508345 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8"] Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.553392 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.553506 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pktd9\" (UniqueName: 
\"kubernetes.io/projected/087d2e23-e74a-45de-baf2-2ed44a358880-kube-api-access-pktd9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.553534 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.553723 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.553750 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.654767 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.655191 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.655248 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.656114 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pktd9\" (UniqueName: \"kubernetes.io/projected/087d2e23-e74a-45de-baf2-2ed44a358880-kube-api-access-pktd9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.656203 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.659309 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.659810 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.660635 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.661091 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.676312 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pktd9\" (UniqueName: \"kubernetes.io/projected/087d2e23-e74a-45de-baf2-2ed44a358880-kube-api-access-pktd9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:49 crc kubenswrapper[4832]: I1002 19:01:49.816569 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:01:50 crc kubenswrapper[4832]: I1002 19:01:50.460982 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8"] Oct 02 19:01:51 crc kubenswrapper[4832]: I1002 19:01:51.456810 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" event={"ID":"087d2e23-e74a-45de-baf2-2ed44a358880","Type":"ContainerStarted","Data":"af23b5dca55cf644ed258984fdf7eab23b0baa073b92e97aab7707598dd21041"} Oct 02 19:01:51 crc kubenswrapper[4832]: I1002 19:01:51.457198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" event={"ID":"087d2e23-e74a-45de-baf2-2ed44a358880","Type":"ContainerStarted","Data":"5049ecfbae3cea65f48f890a3ff8654e3f444398276b2253fde78b2ccb6eef7b"} Oct 02 19:01:51 crc kubenswrapper[4832]: I1002 19:01:51.479620 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" podStartSLOduration=1.992096102 podStartE2EDuration="2.479600213s" podCreationTimestamp="2025-10-02 19:01:49 +0000 UTC" firstStartedPulling="2025-10-02 19:01:50.463996097 +0000 UTC m=+2467.433438969" lastFinishedPulling="2025-10-02 19:01:50.951500158 +0000 UTC m=+2467.920943080" observedRunningTime="2025-10-02 19:01:51.471825781 +0000 UTC m=+2468.441268653" watchObservedRunningTime="2025-10-02 19:01:51.479600213 +0000 UTC m=+2468.449043085" Oct 02 19:01:52 crc kubenswrapper[4832]: I1002 19:01:52.231792 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:01:52 crc kubenswrapper[4832]: E1002 19:01:52.232345 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:02:06 crc kubenswrapper[4832]: I1002 19:02:06.223473 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:02:06 crc kubenswrapper[4832]: E1002 19:02:06.224487 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:02:18 crc kubenswrapper[4832]: I1002 19:02:18.223588 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:02:18 crc kubenswrapper[4832]: E1002 19:02:18.224510 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:02:32 crc kubenswrapper[4832]: I1002 19:02:32.224197 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:02:32 crc kubenswrapper[4832]: E1002 19:02:32.225826 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:02:45 crc kubenswrapper[4832]: I1002 19:02:45.236793 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:02:45 crc kubenswrapper[4832]: E1002 19:02:45.238157 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.223339 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:02:57 crc kubenswrapper[4832]: E1002 19:02:57.224439 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.768326 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-92j49"] Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.770824 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.798190 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j49"] Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.821149 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-catalog-content\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.821389 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-utilities\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.821463 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7w7b\" (UniqueName: \"kubernetes.io/projected/95c2b06b-2635-4e70-a63e-285da7ee931d-kube-api-access-r7w7b\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.923779 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-utilities\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.923862 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7w7b\" (UniqueName: \"kubernetes.io/projected/95c2b06b-2635-4e70-a63e-285da7ee931d-kube-api-access-r7w7b\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.923944 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-catalog-content\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.924313 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-utilities\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.924369 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-catalog-content\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:57 crc kubenswrapper[4832]: I1002 19:02:57.960594 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r7w7b\" (UniqueName: \"kubernetes.io/projected/95c2b06b-2635-4e70-a63e-285da7ee931d-kube-api-access-r7w7b\") pod \"redhat-marketplace-92j49\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:58 crc kubenswrapper[4832]: I1002 19:02:58.120789 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:02:58 crc kubenswrapper[4832]: I1002 19:02:58.601531 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j49"] Oct 02 19:02:59 crc kubenswrapper[4832]: I1002 19:02:59.329239 4832 generic.go:334] "Generic (PLEG): container finished" podID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerID="35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1" exitCode=0 Oct 02 19:02:59 crc kubenswrapper[4832]: I1002 19:02:59.329287 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j49" event={"ID":"95c2b06b-2635-4e70-a63e-285da7ee931d","Type":"ContainerDied","Data":"35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1"} Oct 02 19:02:59 crc kubenswrapper[4832]: I1002 19:02:59.329627 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j49" event={"ID":"95c2b06b-2635-4e70-a63e-285da7ee931d","Type":"ContainerStarted","Data":"58ea599a112d6484d01cb6e0f86e02d54ec51948ee569df38b2f804f50fc93f0"} Oct 02 19:03:01 crc kubenswrapper[4832]: I1002 19:03:01.361211 4832 generic.go:334] "Generic (PLEG): container finished" podID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerID="8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4" exitCode=0 Oct 02 19:03:01 crc kubenswrapper[4832]: I1002 19:03:01.361330 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j49" event={"ID":"95c2b06b-2635-4e70-a63e-285da7ee931d","Type":"ContainerDied","Data":"8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4"} Oct 02 19:03:02 crc kubenswrapper[4832]: I1002 19:03:02.391977 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j49" event={"ID":"95c2b06b-2635-4e70-a63e-285da7ee931d","Type":"ContainerStarted","Data":"769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c"} Oct 02 19:03:02 crc kubenswrapper[4832]: I1002 19:03:02.433550 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-92j49" podStartSLOduration=2.893815156 podStartE2EDuration="5.433524601s" podCreationTimestamp="2025-10-02 19:02:57 +0000 UTC" firstStartedPulling="2025-10-02 19:02:59.332459798 +0000 UTC m=+2536.301902670" lastFinishedPulling="2025-10-02 19:03:01.872169213 +0000 UTC m=+2538.841612115" observedRunningTime="2025-10-02 19:03:02.416438849 +0000 UTC m=+2539.385881721" watchObservedRunningTime="2025-10-02 19:03:02.433524601 +0000 UTC m=+2539.402967503" Oct 02 19:03:08 crc kubenswrapper[4832]: I1002 19:03:08.121791 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:03:08 crc kubenswrapper[4832]: I1002 19:03:08.123850 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:03:08 crc kubenswrapper[4832]: I1002 19:03:08.192527 4832 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:03:08 crc kubenswrapper[4832]: I1002 19:03:08.223690 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:03:08 crc kubenswrapper[4832]: E1002 19:03:08.224200 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:03:08 crc kubenswrapper[4832]: I1002 19:03:08.550607 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:03:08 crc kubenswrapper[4832]: I1002 19:03:08.623069 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j49"] Oct 02 19:03:10 crc kubenswrapper[4832]: I1002 19:03:10.496949 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-92j49" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerName="registry-server" containerID="cri-o://769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c" gracePeriod=2 Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.022929 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.086342 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7w7b\" (UniqueName: \"kubernetes.io/projected/95c2b06b-2635-4e70-a63e-285da7ee931d-kube-api-access-r7w7b\") pod \"95c2b06b-2635-4e70-a63e-285da7ee931d\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.086438 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-utilities\") pod \"95c2b06b-2635-4e70-a63e-285da7ee931d\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.086490 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-catalog-content\") pod \"95c2b06b-2635-4e70-a63e-285da7ee931d\" (UID: \"95c2b06b-2635-4e70-a63e-285da7ee931d\") " Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.090945 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-utilities" (OuterVolumeSpecName: "utilities") pod "95c2b06b-2635-4e70-a63e-285da7ee931d" (UID: "95c2b06b-2635-4e70-a63e-285da7ee931d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.095735 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c2b06b-2635-4e70-a63e-285da7ee931d-kube-api-access-r7w7b" (OuterVolumeSpecName: "kube-api-access-r7w7b") pod "95c2b06b-2635-4e70-a63e-285da7ee931d" (UID: "95c2b06b-2635-4e70-a63e-285da7ee931d"). 
InnerVolumeSpecName "kube-api-access-r7w7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.105619 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95c2b06b-2635-4e70-a63e-285da7ee931d" (UID: "95c2b06b-2635-4e70-a63e-285da7ee931d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.189796 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7w7b\" (UniqueName: \"kubernetes.io/projected/95c2b06b-2635-4e70-a63e-285da7ee931d-kube-api-access-r7w7b\") on node \"crc\" DevicePath \"\"" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.189850 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.189870 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c2b06b-2635-4e70-a63e-285da7ee931d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.511738 4832 generic.go:334] "Generic (PLEG): container finished" podID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerID="769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c" exitCode=0 Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.511779 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j49" event={"ID":"95c2b06b-2635-4e70-a63e-285da7ee931d","Type":"ContainerDied","Data":"769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c"} Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.511891 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j49" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.512613 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j49" event={"ID":"95c2b06b-2635-4e70-a63e-285da7ee931d","Type":"ContainerDied","Data":"58ea599a112d6484d01cb6e0f86e02d54ec51948ee569df38b2f804f50fc93f0"} Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.512697 4832 scope.go:117] "RemoveContainer" containerID="769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.551589 4832 scope.go:117] "RemoveContainer" containerID="8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.565333 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j49"] Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.579663 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j49"] Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.597826 4832 scope.go:117] "RemoveContainer" containerID="35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.674166 4832 scope.go:117] "RemoveContainer" containerID="769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c" Oct 02 19:03:11 crc kubenswrapper[4832]: E1002 19:03:11.674863 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c\": container with ID starting with 769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c not found: ID does not exist" containerID="769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.674930 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c"} err="failed to get container status \"769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c\": rpc error: code = NotFound desc = could not find container \"769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c\": container with ID starting with 769326c3af7800a46d582c79bee14750cee7e1540987813965f82ba623f0e73c not found: ID does not exist" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.674975 4832 scope.go:117] "RemoveContainer" containerID="8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4" Oct 02 19:03:11 crc kubenswrapper[4832]: E1002 19:03:11.675470 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4\": container with ID starting with 8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4 not found: ID does not exist" containerID="8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.675528 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4"} err="failed to get container status \"8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4\": rpc error: code = NotFound desc = could not find 
container \"8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4\": container with ID starting with 8e0f54cb663decba80c1c17aa7aece3446777d5f2e5a8c4c7a7724fda2448fd4 not found: ID does not exist" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.675572 4832 scope.go:117] "RemoveContainer" containerID="35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1" Oct 02 19:03:11 crc kubenswrapper[4832]: E1002 19:03:11.676094 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1\": container with ID starting with 35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1 not found: ID does not exist" containerID="35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1" Oct 02 19:03:11 crc kubenswrapper[4832]: I1002 19:03:11.676176 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1"} err="failed to get container status \"35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1\": rpc error: code = NotFound desc = could not find container \"35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1\": container with ID starting with 35e1ab8ccc1870dfaa662a202ca4a0318bdbba545000b230c205d5ce7deb46f1 not found: ID does not exist" Oct 02 19:03:13 crc kubenswrapper[4832]: I1002 19:03:13.246493 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" path="/var/lib/kubelet/pods/95c2b06b-2635-4e70-a63e-285da7ee931d/volumes" Oct 02 19:03:22 crc kubenswrapper[4832]: I1002 19:03:22.224650 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:03:22 crc kubenswrapper[4832]: E1002 19:03:22.227378 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:03:34 crc kubenswrapper[4832]: I1002 19:03:34.223921 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:03:34 crc kubenswrapper[4832]: E1002 19:03:34.224979 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:03:47 crc kubenswrapper[4832]: I1002 19:03:47.223670 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:03:47 crc kubenswrapper[4832]: E1002 19:03:47.224971 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:03:58 crc kubenswrapper[4832]: I1002 19:03:58.223687 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:03:58 crc kubenswrapper[4832]: E1002 19:03:58.224469 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:04:09 crc kubenswrapper[4832]: I1002 19:04:09.224082 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:04:09 crc kubenswrapper[4832]: E1002 19:04:09.225533 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:04:24 crc kubenswrapper[4832]: I1002 19:04:24.223397 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:04:24 crc kubenswrapper[4832]: E1002 19:04:24.224147 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:04:38 crc kubenswrapper[4832]: I1002 19:04:38.224066 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:04:38 crc kubenswrapper[4832]: E1002 19:04:38.226151 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:04:51 crc kubenswrapper[4832]: I1002 19:04:51.223924 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:04:51 crc kubenswrapper[4832]: E1002 19:04:51.225501 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" 
podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:05:05 crc kubenswrapper[4832]: I1002 19:05:05.238160 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:05:05 crc kubenswrapper[4832]: E1002 19:05:05.239619 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:05:19 crc kubenswrapper[4832]: I1002 19:05:19.223502 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:05:19 crc kubenswrapper[4832]: E1002 19:05:19.224466 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:05:32 crc kubenswrapper[4832]: I1002 19:05:32.224292 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:05:32 crc kubenswrapper[4832]: E1002 19:05:32.225314 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:05:45 crc kubenswrapper[4832]: I1002 19:05:45.242207 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:05:45 crc kubenswrapper[4832]: E1002 19:05:45.243525 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:05:56 crc kubenswrapper[4832]: I1002 19:05:56.223840 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:05:56 crc kubenswrapper[4832]: E1002 19:05:56.225321 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:06:10 crc kubenswrapper[4832]: I1002 19:06:10.223310 4832 scope.go:117] "RemoveContainer" 
containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67" Oct 02 19:06:10 crc kubenswrapper[4832]: I1002 19:06:10.893959 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"0d262d6d86f361a7adda18b96b7965dcaf322b6f42ae0accdc6a02032608db0b"} Oct 02 19:06:21 crc kubenswrapper[4832]: I1002 19:06:21.030116 4832 generic.go:334] "Generic (PLEG): container finished" podID="087d2e23-e74a-45de-baf2-2ed44a358880" containerID="af23b5dca55cf644ed258984fdf7eab23b0baa073b92e97aab7707598dd21041" exitCode=0 Oct 02 19:06:21 crc kubenswrapper[4832]: I1002 19:06:21.030258 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" event={"ID":"087d2e23-e74a-45de-baf2-2ed44a358880","Type":"ContainerDied","Data":"af23b5dca55cf644ed258984fdf7eab23b0baa073b92e97aab7707598dd21041"} Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.505705 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.706044 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-combined-ca-bundle\") pod \"087d2e23-e74a-45de-baf2-2ed44a358880\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.706207 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-inventory\") pod \"087d2e23-e74a-45de-baf2-2ed44a358880\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.706236 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-ssh-key\") pod \"087d2e23-e74a-45de-baf2-2ed44a358880\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.706303 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-secret-0\") pod \"087d2e23-e74a-45de-baf2-2ed44a358880\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.706337 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pktd9\" (UniqueName: \"kubernetes.io/projected/087d2e23-e74a-45de-baf2-2ed44a358880-kube-api-access-pktd9\") pod \"087d2e23-e74a-45de-baf2-2ed44a358880\" (UID: \"087d2e23-e74a-45de-baf2-2ed44a358880\") " Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.714448 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "087d2e23-e74a-45de-baf2-2ed44a358880" (UID: "087d2e23-e74a-45de-baf2-2ed44a358880"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.722553 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/087d2e23-e74a-45de-baf2-2ed44a358880-kube-api-access-pktd9" (OuterVolumeSpecName: "kube-api-access-pktd9") pod "087d2e23-e74a-45de-baf2-2ed44a358880" (UID: "087d2e23-e74a-45de-baf2-2ed44a358880"). InnerVolumeSpecName "kube-api-access-pktd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.748393 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-inventory" (OuterVolumeSpecName: "inventory") pod "087d2e23-e74a-45de-baf2-2ed44a358880" (UID: "087d2e23-e74a-45de-baf2-2ed44a358880"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.752376 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "087d2e23-e74a-45de-baf2-2ed44a358880" (UID: "087d2e23-e74a-45de-baf2-2ed44a358880"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.759917 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "087d2e23-e74a-45de-baf2-2ed44a358880" (UID: "087d2e23-e74a-45de-baf2-2ed44a358880"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.809819 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.810098 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.810202 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.810344 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pktd9\" (UniqueName: \"kubernetes.io/projected/087d2e23-e74a-45de-baf2-2ed44a358880-kube-api-access-pktd9\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:22 crc kubenswrapper[4832]: I1002 19:06:22.810450 4832 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/087d2e23-e74a-45de-baf2-2ed44a358880-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.069025 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" event={"ID":"087d2e23-e74a-45de-baf2-2ed44a358880","Type":"ContainerDied","Data":"5049ecfbae3cea65f48f890a3ff8654e3f444398276b2253fde78b2ccb6eef7b"} Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.069073 4832 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5049ecfbae3cea65f48f890a3ff8654e3f444398276b2253fde78b2ccb6eef7b" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.069081 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.184435 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2"] Oct 02 19:06:23 crc kubenswrapper[4832]: E1002 19:06:23.185139 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="087d2e23-e74a-45de-baf2-2ed44a358880" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.185158 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="087d2e23-e74a-45de-baf2-2ed44a358880" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 02 19:06:23 crc kubenswrapper[4832]: E1002 19:06:23.185174 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerName="extract-content" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.185182 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerName="extract-content" Oct 02 19:06:23 crc kubenswrapper[4832]: E1002 19:06:23.185209 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerName="registry-server" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.185218 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerName="registry-server" Oct 02 19:06:23 crc kubenswrapper[4832]: E1002 19:06:23.185235 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerName="extract-utilities" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.185243 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerName="extract-utilities" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.185661 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="087d2e23-e74a-45de-baf2-2ed44a358880" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.185707 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c2b06b-2635-4e70-a63e-285da7ee931d" containerName="registry-server" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.188443 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.193337 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2"] Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.210776 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.210892 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.210949 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.211005 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.211073 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.211074 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.211190 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218493 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218561 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpq5\" (UniqueName: \"kubernetes.io/projected/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-kube-api-access-4qpq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218595 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218631 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218674 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218735 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218807 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218829 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.218878 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.320553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qpq5\" (UniqueName: \"kubernetes.io/projected/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-kube-api-access-4qpq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.320611 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.320639 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.320678 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.320740 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.320816 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.320840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.320883 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.321022 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.321921 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.325444 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.325647 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" 
(UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.325658 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.326009 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.326018 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.326450 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.327667 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.336462 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qpq5\" (UniqueName: \"kubernetes.io/projected/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-kube-api-access-4qpq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l5mp2\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:23 crc kubenswrapper[4832]: I1002 19:06:23.527419 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:06:24 crc kubenswrapper[4832]: I1002 19:06:24.125165 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2"] Oct 02 19:06:24 crc kubenswrapper[4832]: W1002 19:06:24.145210 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26e8352e_0e5b_4ee9_83f5_aa3323948a6d.slice/crio-8633ad4109f692ee80e4249d396a7421df83caa11896ddffee2337b9663e4470 WatchSource:0}: Error finding container 8633ad4109f692ee80e4249d396a7421df83caa11896ddffee2337b9663e4470: Status 404 returned error can't find the container with id 8633ad4109f692ee80e4249d396a7421df83caa11896ddffee2337b9663e4470 Oct 02 19:06:24 crc kubenswrapper[4832]: I1002 19:06:24.153338 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 19:06:25 crc kubenswrapper[4832]: I1002 19:06:25.092391 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" event={"ID":"26e8352e-0e5b-4ee9-83f5-aa3323948a6d","Type":"ContainerStarted","Data":"62d17ccb2ca210d8115829617512fec877b4d3ea5ed74b74bd9e33b2318f8940"} Oct 02 19:06:25 crc kubenswrapper[4832]: I1002 19:06:25.092775 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" event={"ID":"26e8352e-0e5b-4ee9-83f5-aa3323948a6d","Type":"ContainerStarted","Data":"8633ad4109f692ee80e4249d396a7421df83caa11896ddffee2337b9663e4470"} Oct 02 19:06:25 crc kubenswrapper[4832]: I1002 19:06:25.130031 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" podStartSLOduration=1.688838481 podStartE2EDuration="2.130008721s" podCreationTimestamp="2025-10-02 19:06:23 +0000 UTC" firstStartedPulling="2025-10-02 19:06:24.153001337 +0000 UTC m=+2741.122444229" lastFinishedPulling="2025-10-02 19:06:24.594171597 +0000 UTC m=+2741.563614469" observedRunningTime="2025-10-02 19:06:25.119588247 +0000 UTC m=+2742.089031139" watchObservedRunningTime="2025-10-02 19:06:25.130008721 +0000 UTC m=+2742.099451603" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.442310 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52j5t"] Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.449968 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.461307 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52j5t"] Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.516525 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-catalog-content\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.517062 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-utilities\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.517158 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4l8\" (UniqueName: \"kubernetes.io/projected/a5cf4dd5-04ec-4340-a34e-35bd4029454f-kube-api-access-9f4l8\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.619593 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-catalog-content\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.620218 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-utilities\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.620353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f4l8\" (UniqueName: \"kubernetes.io/projected/a5cf4dd5-04ec-4340-a34e-35bd4029454f-kube-api-access-9f4l8\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.620454 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-catalog-content\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.620647 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-utilities\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.641296 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9f4l8\" (UniqueName: \"kubernetes.io/projected/a5cf4dd5-04ec-4340-a34e-35bd4029454f-kube-api-access-9f4l8\") pod \"community-operators-52j5t\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:28 crc kubenswrapper[4832]: I1002 19:06:28.782998 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:29 crc kubenswrapper[4832]: I1002 19:06:29.366741 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52j5t"] Oct 02 19:06:29 crc kubenswrapper[4832]: W1002 19:06:29.378509 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5cf4dd5_04ec_4340_a34e_35bd4029454f.slice/crio-357a89172c4a3486eb13735294a3ed10cbf9bf7759121c10f6cded89db31380e WatchSource:0}: Error finding container 357a89172c4a3486eb13735294a3ed10cbf9bf7759121c10f6cded89db31380e: Status 404 returned error can't find the container with id 357a89172c4a3486eb13735294a3ed10cbf9bf7759121c10f6cded89db31380e Oct 02 19:06:30 crc kubenswrapper[4832]: I1002 19:06:30.147843 4832 generic.go:334] "Generic (PLEG): container finished" podID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerID="92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1" exitCode=0 Oct 02 19:06:30 crc kubenswrapper[4832]: I1002 19:06:30.147923 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j5t" event={"ID":"a5cf4dd5-04ec-4340-a34e-35bd4029454f","Type":"ContainerDied","Data":"92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1"} Oct 02 19:06:30 crc kubenswrapper[4832]: I1002 19:06:30.148174 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j5t" event={"ID":"a5cf4dd5-04ec-4340-a34e-35bd4029454f","Type":"ContainerStarted","Data":"357a89172c4a3486eb13735294a3ed10cbf9bf7759121c10f6cded89db31380e"} Oct 02 19:06:32 crc kubenswrapper[4832]: I1002 19:06:32.184198 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j5t" event={"ID":"a5cf4dd5-04ec-4340-a34e-35bd4029454f","Type":"ContainerStarted","Data":"b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828"} Oct 02 19:06:33 crc kubenswrapper[4832]: I1002 19:06:33.204004 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j5t" event={"ID":"a5cf4dd5-04ec-4340-a34e-35bd4029454f","Type":"ContainerDied","Data":"b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828"} Oct 02 19:06:33 crc kubenswrapper[4832]: I1002 19:06:33.203870 4832 generic.go:334] "Generic (PLEG): container finished" podID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerID="b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828" exitCode=0 Oct 02 19:06:34 crc kubenswrapper[4832]: I1002 19:06:34.226828 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j5t" event={"ID":"a5cf4dd5-04ec-4340-a34e-35bd4029454f","Type":"ContainerStarted","Data":"50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c"} Oct 02 19:06:34 crc kubenswrapper[4832]: I1002 19:06:34.259203 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52j5t" 
podStartSLOduration=2.685578449 podStartE2EDuration="6.25918381s" podCreationTimestamp="2025-10-02 19:06:28 +0000 UTC" firstStartedPulling="2025-10-02 19:06:30.14987236 +0000 UTC m=+2747.119315232" lastFinishedPulling="2025-10-02 19:06:33.723477681 +0000 UTC m=+2750.692920593" observedRunningTime="2025-10-02 19:06:34.25563583 +0000 UTC m=+2751.225078702" watchObservedRunningTime="2025-10-02 19:06:34.25918381 +0000 UTC m=+2751.228626682" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.312844 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7qkfh"] Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.333159 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.335855 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qkfh"] Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.377777 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-catalog-content\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.378125 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-utilities\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.378174 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgpmt\" (UniqueName: \"kubernetes.io/projected/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-kube-api-access-jgpmt\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.480471 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-catalog-content\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.480637 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-utilities\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.480668 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgpmt\" (UniqueName: \"kubernetes.io/projected/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-kube-api-access-jgpmt\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.481296 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-catalog-content\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.482005 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-utilities\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.508721 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgpmt\" (UniqueName: \"kubernetes.io/projected/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-kube-api-access-jgpmt\") pod \"certified-operators-7qkfh\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.653768 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.783099 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.783169 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:38 crc kubenswrapper[4832]: I1002 19:06:38.863247 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:39 crc kubenswrapper[4832]: I1002 19:06:39.291296 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qkfh"] Oct 02 19:06:39 crc kubenswrapper[4832]: W1002 19:06:39.293243 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5caf7b7_6de5_4f52_8e3f_570ff50c59c4.slice/crio-05e09ec9533e02b60c9783ba278d7bbf801c2a91b5a5a85102a6d8a4b3f60b9b WatchSource:0}: Error finding container 05e09ec9533e02b60c9783ba278d7bbf801c2a91b5a5a85102a6d8a4b3f60b9b: Status 404 returned error can't find the container with id 05e09ec9533e02b60c9783ba278d7bbf801c2a91b5a5a85102a6d8a4b3f60b9b Oct 02 19:06:39 crc kubenswrapper[4832]: I1002 19:06:39.346008 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:40 crc kubenswrapper[4832]: I1002 19:06:40.302769 4832 generic.go:334] "Generic (PLEG): container finished" podID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerID="ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982" exitCode=0 Oct 02 19:06:40 crc kubenswrapper[4832]: I1002 19:06:40.302885 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qkfh" event={"ID":"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4","Type":"ContainerDied","Data":"ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982"} Oct 02 19:06:40 crc kubenswrapper[4832]: I1002 19:06:40.302946 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qkfh" 
event={"ID":"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4","Type":"ContainerStarted","Data":"05e09ec9533e02b60c9783ba278d7bbf801c2a91b5a5a85102a6d8a4b3f60b9b"} Oct 02 19:06:41 crc kubenswrapper[4832]: I1002 19:06:41.288727 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52j5t"] Oct 02 19:06:41 crc kubenswrapper[4832]: I1002 19:06:41.313074 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52j5t" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerName="registry-server" containerID="cri-o://50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c" gracePeriod=2 Oct 02 19:06:41 crc kubenswrapper[4832]: I1002 19:06:41.803732 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:41 crc kubenswrapper[4832]: I1002 19:06:41.966927 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f4l8\" (UniqueName: \"kubernetes.io/projected/a5cf4dd5-04ec-4340-a34e-35bd4029454f-kube-api-access-9f4l8\") pod \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " Oct 02 19:06:41 crc kubenswrapper[4832]: I1002 19:06:41.967131 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-utilities\") pod \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " Oct 02 19:06:41 crc kubenswrapper[4832]: I1002 19:06:41.967204 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-catalog-content\") pod \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\" (UID: \"a5cf4dd5-04ec-4340-a34e-35bd4029454f\") " Oct 02 19:06:41 crc kubenswrapper[4832]: I1002 19:06:41.968111 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-utilities" (OuterVolumeSpecName: "utilities") pod "a5cf4dd5-04ec-4340-a34e-35bd4029454f" (UID: "a5cf4dd5-04ec-4340-a34e-35bd4029454f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:06:41 crc kubenswrapper[4832]: I1002 19:06:41.975744 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5cf4dd5-04ec-4340-a34e-35bd4029454f-kube-api-access-9f4l8" (OuterVolumeSpecName: "kube-api-access-9f4l8") pod "a5cf4dd5-04ec-4340-a34e-35bd4029454f" (UID: "a5cf4dd5-04ec-4340-a34e-35bd4029454f"). InnerVolumeSpecName "kube-api-access-9f4l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.009703 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5cf4dd5-04ec-4340-a34e-35bd4029454f" (UID: "a5cf4dd5-04ec-4340-a34e-35bd4029454f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.070211 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f4l8\" (UniqueName: \"kubernetes.io/projected/a5cf4dd5-04ec-4340-a34e-35bd4029454f-kube-api-access-9f4l8\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.070244 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.070254 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5cf4dd5-04ec-4340-a34e-35bd4029454f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.339693 4832 generic.go:334] "Generic (PLEG): container finished" podID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerID="50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c" exitCode=0 Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.339802 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j5t" event={"ID":"a5cf4dd5-04ec-4340-a34e-35bd4029454f","Type":"ContainerDied","Data":"50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c"} Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.339904 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52j5t" event={"ID":"a5cf4dd5-04ec-4340-a34e-35bd4029454f","Type":"ContainerDied","Data":"357a89172c4a3486eb13735294a3ed10cbf9bf7759121c10f6cded89db31380e"} Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.339850 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52j5t" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.340206 4832 scope.go:117] "RemoveContainer" containerID="50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.343688 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qkfh" event={"ID":"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4","Type":"ContainerStarted","Data":"6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a"} Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.384881 4832 scope.go:117] "RemoveContainer" containerID="b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.413638 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52j5t"] Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.416768 4832 scope.go:117] "RemoveContainer" containerID="92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.426860 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52j5t"] Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.507714 4832 scope.go:117] "RemoveContainer" containerID="50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c" Oct 02 19:06:42 crc kubenswrapper[4832]: E1002 19:06:42.508578 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c\": container with ID starting with 50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c not found: ID does not exist" containerID="50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.508633 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c"} err="failed to get container status \"50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c\": rpc error: code = NotFound desc = could not find container \"50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c\": container with ID starting with 50976fe1594b19036cd5222db6b5907073fcd5c1d9a5248240dccd7e7526ab1c not found: ID does not exist" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.508665 4832 scope.go:117] "RemoveContainer" containerID="b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828" Oct 02 19:06:42 crc kubenswrapper[4832]: E1002 19:06:42.509335 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828\": container with ID starting with b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828 not found: ID does not exist" containerID="b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.509468 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828"} err="failed to get container status \"b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828\": rpc error: code = NotFound desc = could not find 
container \"b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828\": container with ID starting with b0882ca91d0dcf0b684635b95c61112d3fe7a008aef10583865292db6520d828 not found: ID does not exist" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.509571 4832 scope.go:117] "RemoveContainer" containerID="92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1" Oct 02 19:06:42 crc kubenswrapper[4832]: E1002 19:06:42.510020 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1\": container with ID starting with 92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1 not found: ID does not exist" containerID="92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1" Oct 02 19:06:42 crc kubenswrapper[4832]: I1002 19:06:42.510136 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1"} err="failed to get container status \"92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1\": rpc error: code = NotFound desc = could not find container \"92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1\": container with ID starting with 92d45bdf052ea69c2ea0f616513d3b80f46f9214df9aa74fe6b077c9b2f6c4d1 not found: ID does not exist" Oct 02 19:06:43 crc kubenswrapper[4832]: I1002 19:06:43.247387 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" path="/var/lib/kubelet/pods/a5cf4dd5-04ec-4340-a34e-35bd4029454f/volumes" Oct 02 19:06:43 crc kubenswrapper[4832]: I1002 19:06:43.357863 4832 generic.go:334] "Generic (PLEG): container finished" podID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerID="6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a" exitCode=0 Oct 02 19:06:43 crc kubenswrapper[4832]: I1002 19:06:43.357957 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qkfh" event={"ID":"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4","Type":"ContainerDied","Data":"6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a"} Oct 02 19:06:44 crc kubenswrapper[4832]: I1002 19:06:44.376318 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qkfh" event={"ID":"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4","Type":"ContainerStarted","Data":"b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277"} Oct 02 19:06:44 crc kubenswrapper[4832]: I1002 19:06:44.406909 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7qkfh" podStartSLOduration=2.8169287179999998 podStartE2EDuration="6.406884006s" podCreationTimestamp="2025-10-02 19:06:38 +0000 UTC" firstStartedPulling="2025-10-02 19:06:40.305989728 +0000 UTC m=+2757.275432640" lastFinishedPulling="2025-10-02 19:06:43.895945046 +0000 UTC m=+2760.865387928" observedRunningTime="2025-10-02 19:06:44.401084626 +0000 UTC m=+2761.370527498" watchObservedRunningTime="2025-10-02 19:06:44.406884006 +0000 UTC m=+2761.376326888" Oct 02 19:06:48 crc kubenswrapper[4832]: I1002 19:06:48.654058 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:48 crc kubenswrapper[4832]: I1002 19:06:48.654774 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:48 crc kubenswrapper[4832]: I1002 19:06:48.717805 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:49 crc kubenswrapper[4832]: I1002 19:06:49.506186 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:49 crc kubenswrapper[4832]: I1002 19:06:49.564493 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qkfh"] Oct 02 19:06:51 crc kubenswrapper[4832]: I1002 19:06:51.474477 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7qkfh" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerName="registry-server" containerID="cri-o://b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277" gracePeriod=2 Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.030652 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.151835 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-catalog-content\") pod \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.152107 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-utilities\") pod \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.152133 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgpmt\" (UniqueName: \"kubernetes.io/projected/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-kube-api-access-jgpmt\") pod \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\" (UID: \"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4\") " Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.152964 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-utilities" (OuterVolumeSpecName: "utilities") pod "f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" (UID: "f5caf7b7-6de5-4f52-8e3f-570ff50c59c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.153711 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.164395 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-kube-api-access-jgpmt" (OuterVolumeSpecName: "kube-api-access-jgpmt") pod "f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" (UID: "f5caf7b7-6de5-4f52-8e3f-570ff50c59c4"). InnerVolumeSpecName "kube-api-access-jgpmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.202914 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" (UID: "f5caf7b7-6de5-4f52-8e3f-570ff50c59c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.256767 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.256830 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgpmt\" (UniqueName: \"kubernetes.io/projected/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4-kube-api-access-jgpmt\") on node \"crc\" DevicePath \"\"" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.504742 4832 generic.go:334] "Generic (PLEG): container finished" podID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerID="b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277" exitCode=0 Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.504815 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qkfh" event={"ID":"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4","Type":"ContainerDied","Data":"b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277"} Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.504868 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qkfh" event={"ID":"f5caf7b7-6de5-4f52-8e3f-570ff50c59c4","Type":"ContainerDied","Data":"05e09ec9533e02b60c9783ba278d7bbf801c2a91b5a5a85102a6d8a4b3f60b9b"} Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.504902 4832 scope.go:117] "RemoveContainer" containerID="b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.505445 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qkfh" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.550804 4832 scope.go:117] "RemoveContainer" containerID="6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.560209 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qkfh"] Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.569722 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7qkfh"] Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.583965 4832 scope.go:117] "RemoveContainer" containerID="ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.660679 4832 scope.go:117] "RemoveContainer" containerID="b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277" Oct 02 19:06:52 crc kubenswrapper[4832]: E1002 19:06:52.661178 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277\": container with ID starting with b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277 not found: ID does not exist" containerID="b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.661253 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277"} err="failed to get container status \"b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277\": rpc error: code = NotFound desc = could not find container \"b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277\": container with ID starting with b52edaf933a7924f23feca5cbac3b16f0277b206a0eb4d213db862cac0ea2277 not found: ID does not exist" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.661343 4832 scope.go:117] "RemoveContainer" containerID="6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a" Oct 02 19:06:52 crc kubenswrapper[4832]: E1002 19:06:52.661843 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a\": container with ID starting with 6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a not found: ID does not exist" containerID="6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.661870 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a"} err="failed to get container status \"6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a\": rpc error: code = NotFound desc = could not find container \"6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a\": container with ID starting with 6892840ad30f9c0ab0eb721921af30ec68497e270baf3dff00b033cad0c53e9a not found: ID does not exist" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.661887 4832 scope.go:117] "RemoveContainer" containerID="ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982" Oct 02 19:06:52 crc kubenswrapper[4832]: E1002 19:06:52.662359 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982\": container with ID starting with ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982 not found: ID does not exist" containerID="ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982" Oct 02 19:06:52 crc kubenswrapper[4832]: I1002 19:06:52.662391 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982"} err="failed to get container status \"ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982\": rpc error: code = NotFound desc = could not find container \"ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982\": container with ID starting with ac95cffaad96edbdc1ec44dda83faff76031389d1eb3b78f23b7951f3d0c5982 not found: ID does not exist" Oct 02 19:06:53 crc kubenswrapper[4832]: I1002 19:06:53.239283 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" path="/var/lib/kubelet/pods/f5caf7b7-6de5-4f52-8e3f-570ff50c59c4/volumes" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.081651 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-blf7z"] Oct 02 19:07:04 crc kubenswrapper[4832]: E1002 19:07:04.082738 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerName="extract-utilities" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.082753 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerName="extract-utilities" Oct 02 19:07:04 crc kubenswrapper[4832]: E1002 19:07:04.082784 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerName="extract-content" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.082790 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerName="extract-content" Oct 02 19:07:04 crc kubenswrapper[4832]: E1002 19:07:04.082817 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerName="registry-server" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.082823 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerName="registry-server" Oct 02 19:07:04 crc kubenswrapper[4832]: E1002 19:07:04.082835 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerName="extract-content" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.082841 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerName="extract-content" Oct 02 19:07:04 crc kubenswrapper[4832]: E1002 19:07:04.082852 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerName="extract-utilities" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.082858 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerName="extract-utilities" Oct 02 19:07:04 crc kubenswrapper[4832]: E1002 19:07:04.082869 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" 
containerName="registry-server" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.082875 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerName="registry-server" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.083127 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5caf7b7-6de5-4f52-8e3f-570ff50c59c4" containerName="registry-server" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.083150 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5cf4dd5-04ec-4340-a34e-35bd4029454f" containerName="registry-server" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.085017 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.091111 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blf7z"] Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.163540 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-utilities\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.164849 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8245p\" (UniqueName: \"kubernetes.io/projected/2e63fd61-feea-4c2a-8394-296a83c1d582-kube-api-access-8245p\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.164890 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-catalog-content\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.268179 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8245p\" (UniqueName: \"kubernetes.io/projected/2e63fd61-feea-4c2a-8394-296a83c1d582-kube-api-access-8245p\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.268228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-catalog-content\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.268327 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-utilities\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.268997 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-catalog-content\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.269013 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-utilities\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.290857 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8245p\" (UniqueName: \"kubernetes.io/projected/2e63fd61-feea-4c2a-8394-296a83c1d582-kube-api-access-8245p\") pod \"redhat-operators-blf7z\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.423759 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:04 crc kubenswrapper[4832]: I1002 19:07:04.936876 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blf7z"] Oct 02 19:07:05 crc kubenswrapper[4832]: I1002 19:07:05.697573 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerID="4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba" exitCode=0 Oct 02 19:07:05 crc kubenswrapper[4832]: I1002 19:07:05.697638 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blf7z" event={"ID":"2e63fd61-feea-4c2a-8394-296a83c1d582","Type":"ContainerDied","Data":"4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba"} Oct 02 19:07:05 crc kubenswrapper[4832]: I1002 19:07:05.697677 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blf7z" event={"ID":"2e63fd61-feea-4c2a-8394-296a83c1d582","Type":"ContainerStarted","Data":"d1d2c2527fc6d98889debc42f43e9a4dd290706bcd08a345e6219ce7a3fe15f9"} Oct 02 19:07:07 crc kubenswrapper[4832]: I1002 19:07:07.729066 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blf7z" event={"ID":"2e63fd61-feea-4c2a-8394-296a83c1d582","Type":"ContainerStarted","Data":"5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb"} Oct 02 19:07:12 crc kubenswrapper[4832]: I1002 19:07:12.792174 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerID="5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb" exitCode=0 Oct 02 19:07:12 crc kubenswrapper[4832]: I1002 19:07:12.792228 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blf7z" event={"ID":"2e63fd61-feea-4c2a-8394-296a83c1d582","Type":"ContainerDied","Data":"5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb"} Oct 02 19:07:13 crc kubenswrapper[4832]: I1002 19:07:13.808919 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blf7z" event={"ID":"2e63fd61-feea-4c2a-8394-296a83c1d582","Type":"ContainerStarted","Data":"1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8"} Oct 02 19:07:13 crc kubenswrapper[4832]: I1002 19:07:13.853154 4832 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-blf7z" podStartSLOduration=2.304993093 podStartE2EDuration="9.853134652s" podCreationTimestamp="2025-10-02 19:07:04 +0000 UTC" firstStartedPulling="2025-10-02 19:07:05.700190793 +0000 UTC m=+2782.669633675" lastFinishedPulling="2025-10-02 19:07:13.248332352 +0000 UTC m=+2790.217775234" observedRunningTime="2025-10-02 19:07:13.834558054 +0000 UTC m=+2790.804000956" watchObservedRunningTime="2025-10-02 19:07:13.853134652 +0000 UTC m=+2790.822577524" Oct 02 19:07:14 crc kubenswrapper[4832]: I1002 19:07:14.425365 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:14 crc kubenswrapper[4832]: I1002 19:07:14.425431 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:15 crc kubenswrapper[4832]: I1002 19:07:15.503002 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-blf7z" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="registry-server" probeResult="failure" output=< Oct 02 19:07:15 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:07:15 crc kubenswrapper[4832]: > Oct 02 19:07:24 crc kubenswrapper[4832]: I1002 19:07:24.527844 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:24 crc kubenswrapper[4832]: I1002 19:07:24.607316 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:24 crc kubenswrapper[4832]: I1002 19:07:24.779761 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blf7z"] Oct 02 19:07:25 crc kubenswrapper[4832]: I1002 19:07:25.969007 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-blf7z" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="registry-server" containerID="cri-o://1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8" gracePeriod=2 Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.479511 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.597537 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-catalog-content\") pod \"2e63fd61-feea-4c2a-8394-296a83c1d582\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.598440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8245p\" (UniqueName: \"kubernetes.io/projected/2e63fd61-feea-4c2a-8394-296a83c1d582-kube-api-access-8245p\") pod \"2e63fd61-feea-4c2a-8394-296a83c1d582\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.598491 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-utilities\") pod \"2e63fd61-feea-4c2a-8394-296a83c1d582\" (UID: \"2e63fd61-feea-4c2a-8394-296a83c1d582\") " Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.599087 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-utilities" (OuterVolumeSpecName: "utilities") pod "2e63fd61-feea-4c2a-8394-296a83c1d582" (UID: "2e63fd61-feea-4c2a-8394-296a83c1d582"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.606177 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e63fd61-feea-4c2a-8394-296a83c1d582-kube-api-access-8245p" (OuterVolumeSpecName: "kube-api-access-8245p") pod "2e63fd61-feea-4c2a-8394-296a83c1d582" (UID: "2e63fd61-feea-4c2a-8394-296a83c1d582"). InnerVolumeSpecName "kube-api-access-8245p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.696798 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e63fd61-feea-4c2a-8394-296a83c1d582" (UID: "2e63fd61-feea-4c2a-8394-296a83c1d582"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.701550 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8245p\" (UniqueName: \"kubernetes.io/projected/2e63fd61-feea-4c2a-8394-296a83c1d582-kube-api-access-8245p\") on node \"crc\" DevicePath \"\"" Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.701581 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.701610 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e63fd61-feea-4c2a-8394-296a83c1d582-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.984188 4832 generic.go:334] "Generic (PLEG): container finished" podID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerID="1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8" exitCode=0 Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.984253 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blf7z" event={"ID":"2e63fd61-feea-4c2a-8394-296a83c1d582","Type":"ContainerDied","Data":"1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8"} Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.984297 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blf7z" Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.984325 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blf7z" event={"ID":"2e63fd61-feea-4c2a-8394-296a83c1d582","Type":"ContainerDied","Data":"d1d2c2527fc6d98889debc42f43e9a4dd290706bcd08a345e6219ce7a3fe15f9"} Oct 02 19:07:26 crc kubenswrapper[4832]: I1002 19:07:26.984343 4832 scope.go:117] "RemoveContainer" containerID="1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.020058 4832 scope.go:117] "RemoveContainer" containerID="5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.041743 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blf7z"] Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.054445 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-blf7z"] Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.080400 4832 scope.go:117] "RemoveContainer" containerID="4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.139625 4832 scope.go:117] "RemoveContainer" containerID="1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8" Oct 02 19:07:27 crc kubenswrapper[4832]: E1002 19:07:27.140611 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8\": container with ID starting with 1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8 not found: ID does not exist" containerID="1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.140737 4832 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8"} err="failed to get container status \"1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8\": rpc error: code = NotFound desc = could not find container \"1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8\": container with ID starting with 1b20fe9f86a8fd085575859fa9eb8d6691a5b473c66ac090976c3d87da964fc8 not found: ID does not exist" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.140859 4832 scope.go:117] "RemoveContainer" containerID="5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb" Oct 02 19:07:27 crc kubenswrapper[4832]: E1002 19:07:27.142111 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb\": container with ID starting with 5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb not found: ID does not exist" containerID="5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.142177 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb"} err="failed to get container status \"5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb\": rpc error: code = NotFound desc = could not find container \"5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb\": container with ID starting with 5abc0cfb923f179857c103ff2850bb6188781538918ee9953caae2b5b8c9e9eb not found: ID does not exist" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.142212 4832 scope.go:117] "RemoveContainer" containerID="4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba" Oct 02 19:07:27 crc kubenswrapper[4832]: E1002 19:07:27.142956 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba\": container with ID starting with 4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba not found: ID does not exist" containerID="4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.143027 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba"} err="failed to get container status \"4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba\": rpc error: code = NotFound desc = could not find container \"4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba\": container with ID starting with 4dccd2b229ad6a4abcdc713f4a774eca86f7268816af7665158d21b7f95294ba not found: ID does not exist" Oct 02 19:07:27 crc kubenswrapper[4832]: I1002 19:07:27.235783 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" path="/var/lib/kubelet/pods/2e63fd61-feea-4c2a-8394-296a83c1d582/volumes" Oct 02 19:08:26 crc kubenswrapper[4832]: I1002 19:08:26.875372 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
Oct 02 19:08:26 crc kubenswrapper[4832]: I1002 19:08:26.876100 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 19:08:56 crc kubenswrapper[4832]: I1002 19:08:56.876041 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 19:08:56 crc kubenswrapper[4832]: I1002 19:08:56.876713 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 19:09:26 crc kubenswrapper[4832]: I1002 19:09:26.875208 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 19:09:26 crc kubenswrapper[4832]: I1002 19:09:26.875934 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 19:09:26 crc kubenswrapper[4832]: I1002 19:09:26.876011 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg"
Oct 02 19:09:26 crc kubenswrapper[4832]: I1002 19:09:26.877090 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d262d6d86f361a7adda18b96b7965dcaf322b6f42ae0accdc6a02032608db0b"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 02 19:09:26 crc kubenswrapper[4832]: I1002 19:09:26.877163 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://0d262d6d86f361a7adda18b96b7965dcaf322b6f42ae0accdc6a02032608db0b" gracePeriod=600
Oct 02 19:09:27 crc kubenswrapper[4832]: I1002 19:09:27.692859 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="0d262d6d86f361a7adda18b96b7965dcaf322b6f42ae0accdc6a02032608db0b" exitCode=0
Oct 02 19:09:27 crc kubenswrapper[4832]: I1002 19:09:27.693068 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"0d262d6d86f361a7adda18b96b7965dcaf322b6f42ae0accdc6a02032608db0b"}
Oct 02 19:09:27 crc kubenswrapper[4832]: I1002 19:09:27.693708 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785"}
Oct 02 19:09:27 crc kubenswrapper[4832]: I1002 19:09:27.693753 4832 scope.go:117] "RemoveContainer" containerID="f697bdadd0bc11212eac9cb2dfcb78ae4ea2e23d82acb02dd8cb0e1e8da4db67"
Oct 02 19:10:04 crc kubenswrapper[4832]: I1002 19:10:04.173375 4832 generic.go:334] "Generic (PLEG): container finished" podID="26e8352e-0e5b-4ee9-83f5-aa3323948a6d" containerID="62d17ccb2ca210d8115829617512fec877b4d3ea5ed74b74bd9e33b2318f8940" exitCode=0
Oct 02 19:10:04 crc kubenswrapper[4832]: I1002 19:10:04.173501 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" event={"ID":"26e8352e-0e5b-4ee9-83f5-aa3323948a6d","Type":"ContainerDied","Data":"62d17ccb2ca210d8115829617512fec877b4d3ea5ed74b74bd9e33b2318f8940"}
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.763166 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2"
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.905188 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-0\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.905822 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-combined-ca-bundle\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.906128 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-ssh-key\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.906300 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-0\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.906492 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-extra-config-0\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.906577 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qpq5\" (UniqueName: \"kubernetes.io/projected/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-kube-api-access-4qpq5\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.906654 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-1\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.906734 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-inventory\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.907256 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-1\") pod \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\" (UID: \"26e8352e-0e5b-4ee9-83f5-aa3323948a6d\") "
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.913780 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-kube-api-access-4qpq5" (OuterVolumeSpecName: "kube-api-access-4qpq5") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "kube-api-access-4qpq5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.916416 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.947923 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-inventory" (OuterVolumeSpecName: "inventory") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.947907 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.950713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.954049 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.962826 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.979011 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 19:10:05 crc kubenswrapper[4832]: I1002 19:10:05.990461 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "26e8352e-0e5b-4ee9-83f5-aa3323948a6d" (UID: "26e8352e-0e5b-4ee9-83f5-aa3323948a6d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011037 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011070 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-inventory\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011080 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011090 4832 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011098 4832 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011107 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011115 4832 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011133 4832 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.011141 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qpq5\" (UniqueName: \"kubernetes.io/projected/26e8352e-0e5b-4ee9-83f5-aa3323948a6d-kube-api-access-4qpq5\") on node \"crc\" DevicePath \"\""
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.200815 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" event={"ID":"26e8352e-0e5b-4ee9-83f5-aa3323948a6d","Type":"ContainerDied","Data":"8633ad4109f692ee80e4249d396a7421df83caa11896ddffee2337b9663e4470"}
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.201164 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8633ad4109f692ee80e4249d396a7421df83caa11896ddffee2337b9663e4470"
Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.200916 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2"
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l5mp2" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.362666 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4"] Oct 02 19:10:06 crc kubenswrapper[4832]: E1002 19:10:06.363357 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="extract-content" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.363379 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="extract-content" Oct 02 19:10:06 crc kubenswrapper[4832]: E1002 19:10:06.363407 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="registry-server" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.363415 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="registry-server" Oct 02 19:10:06 crc kubenswrapper[4832]: E1002 19:10:06.363425 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="extract-utilities" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.363435 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="extract-utilities" Oct 02 19:10:06 crc kubenswrapper[4832]: E1002 19:10:06.363466 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e8352e-0e5b-4ee9-83f5-aa3323948a6d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.363476 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e8352e-0e5b-4ee9-83f5-aa3323948a6d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.363752 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e8352e-0e5b-4ee9-83f5-aa3323948a6d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.363797 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e63fd61-feea-4c2a-8394-296a83c1d582" containerName="registry-server" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.364819 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.367199 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.367650 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.367902 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.368084 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.368290 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.397110 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4"] Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.524572 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kmbg\" (UniqueName: \"kubernetes.io/projected/29281442-d5d6-4c9c-b24d-82c29d04990e-kube-api-access-2kmbg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.524642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.524803 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.524987 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.525044 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.525151 4832 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.525226 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.627434 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.627507 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.627613 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.627657 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.627709 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kmbg\" (UniqueName: \"kubernetes.io/projected/29281442-d5d6-4c9c-b24d-82c29d04990e-kube-api-access-2kmbg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.627743 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.627788 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.632655 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.632767 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.634208 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.634673 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.635373 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.646112 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.646456 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kmbg\" (UniqueName: \"kubernetes.io/projected/29281442-d5d6-4c9c-b24d-82c29d04990e-kube-api-access-2kmbg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4\" (UID: 
\"29281442-d5d6-4c9c-b24d-82c29d04990e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:06 crc kubenswrapper[4832]: I1002 19:10:06.692984 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:10:07 crc kubenswrapper[4832]: I1002 19:10:07.277726 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4"] Oct 02 19:10:08 crc kubenswrapper[4832]: I1002 19:10:08.229583 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" event={"ID":"29281442-d5d6-4c9c-b24d-82c29d04990e","Type":"ContainerStarted","Data":"9c951436cac0aa45bfae71cc083ac4003daaa0e1a9ea4778f1492294c9a17b2d"} Oct 02 19:10:08 crc kubenswrapper[4832]: I1002 19:10:08.230001 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" event={"ID":"29281442-d5d6-4c9c-b24d-82c29d04990e","Type":"ContainerStarted","Data":"6de267f292c9863380208b011b45376bc81935fc62f0de97e053739a20f663a7"} Oct 02 19:10:08 crc kubenswrapper[4832]: I1002 19:10:08.271250 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" podStartSLOduration=1.858852815 podStartE2EDuration="2.271233102s" podCreationTimestamp="2025-10-02 19:10:06 +0000 UTC" firstStartedPulling="2025-10-02 19:10:07.272410246 +0000 UTC m=+2964.241853138" lastFinishedPulling="2025-10-02 19:10:07.684790553 +0000 UTC m=+2964.654233425" observedRunningTime="2025-10-02 19:10:08.255693539 +0000 UTC m=+2965.225136421" watchObservedRunningTime="2025-10-02 19:10:08.271233102 +0000 UTC m=+2965.240675974" Oct 02 19:11:56 crc kubenswrapper[4832]: I1002 19:11:56.876225 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:11:56 crc kubenswrapper[4832]: I1002 19:11:56.878679 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:12:26 crc kubenswrapper[4832]: I1002 19:12:26.875507 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:12:26 crc kubenswrapper[4832]: I1002 19:12:26.875976 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:12:41 crc kubenswrapper[4832]: I1002 19:12:41.262590 4832 generic.go:334] "Generic (PLEG): container finished" podID="29281442-d5d6-4c9c-b24d-82c29d04990e" 
containerID="9c951436cac0aa45bfae71cc083ac4003daaa0e1a9ea4778f1492294c9a17b2d" exitCode=0 Oct 02 19:12:41 crc kubenswrapper[4832]: I1002 19:12:41.262763 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" event={"ID":"29281442-d5d6-4c9c-b24d-82c29d04990e","Type":"ContainerDied","Data":"9c951436cac0aa45bfae71cc083ac4003daaa0e1a9ea4778f1492294c9a17b2d"} Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.748243 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.854948 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-inventory\") pod \"29281442-d5d6-4c9c-b24d-82c29d04990e\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.855129 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-2\") pod \"29281442-d5d6-4c9c-b24d-82c29d04990e\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.855174 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-1\") pod \"29281442-d5d6-4c9c-b24d-82c29d04990e\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.855266 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-telemetry-combined-ca-bundle\") pod \"29281442-d5d6-4c9c-b24d-82c29d04990e\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.855351 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ssh-key\") pod \"29281442-d5d6-4c9c-b24d-82c29d04990e\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.855427 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kmbg\" (UniqueName: \"kubernetes.io/projected/29281442-d5d6-4c9c-b24d-82c29d04990e-kube-api-access-2kmbg\") pod \"29281442-d5d6-4c9c-b24d-82c29d04990e\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.855528 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-0\") pod \"29281442-d5d6-4c9c-b24d-82c29d04990e\" (UID: \"29281442-d5d6-4c9c-b24d-82c29d04990e\") " Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.861063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "29281442-d5d6-4c9c-b24d-82c29d04990e" 
(UID: "29281442-d5d6-4c9c-b24d-82c29d04990e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.861097 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29281442-d5d6-4c9c-b24d-82c29d04990e-kube-api-access-2kmbg" (OuterVolumeSpecName: "kube-api-access-2kmbg") pod "29281442-d5d6-4c9c-b24d-82c29d04990e" (UID: "29281442-d5d6-4c9c-b24d-82c29d04990e"). InnerVolumeSpecName "kube-api-access-2kmbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.886548 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "29281442-d5d6-4c9c-b24d-82c29d04990e" (UID: "29281442-d5d6-4c9c-b24d-82c29d04990e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.887038 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "29281442-d5d6-4c9c-b24d-82c29d04990e" (UID: "29281442-d5d6-4c9c-b24d-82c29d04990e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.888595 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-inventory" (OuterVolumeSpecName: "inventory") pod "29281442-d5d6-4c9c-b24d-82c29d04990e" (UID: "29281442-d5d6-4c9c-b24d-82c29d04990e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.895818 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "29281442-d5d6-4c9c-b24d-82c29d04990e" (UID: "29281442-d5d6-4c9c-b24d-82c29d04990e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.911421 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "29281442-d5d6-4c9c-b24d-82c29d04990e" (UID: "29281442-d5d6-4c9c-b24d-82c29d04990e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.958409 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.958453 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.958471 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.958487 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.958503 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kmbg\" (UniqueName: \"kubernetes.io/projected/29281442-d5d6-4c9c-b24d-82c29d04990e-kube-api-access-2kmbg\") on node \"crc\" DevicePath \"\"" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.958515 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 02 19:12:42 crc kubenswrapper[4832]: I1002 19:12:42.958531 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29281442-d5d6-4c9c-b24d-82c29d04990e-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.294249 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" event={"ID":"29281442-d5d6-4c9c-b24d-82c29d04990e","Type":"ContainerDied","Data":"6de267f292c9863380208b011b45376bc81935fc62f0de97e053739a20f663a7"} Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.294343 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de267f292c9863380208b011b45376bc81935fc62f0de97e053739a20f663a7" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.294372 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.426879 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg"] Oct 02 19:12:43 crc kubenswrapper[4832]: E1002 19:12:43.427737 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29281442-d5d6-4c9c-b24d-82c29d04990e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.427760 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="29281442-d5d6-4c9c-b24d-82c29d04990e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.428051 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="29281442-d5d6-4c9c-b24d-82c29d04990e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.429049 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.431224 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.431793 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.432357 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.437064 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.437586 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.461568 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg"] Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.575876 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.576031 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.576313 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.576366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.576434 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w65tm\" (UniqueName: \"kubernetes.io/projected/0afc2c94-7e28-4344-b4be-807607a5c0e4-kube-api-access-w65tm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.576493 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.576546 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.679107 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w65tm\" (UniqueName: \"kubernetes.io/projected/0afc2c94-7e28-4344-b4be-807607a5c0e4-kube-api-access-w65tm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.679549 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.679757 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.679992 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.680243 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.680692 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.680903 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.685798 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.686856 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.688337 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 
19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.688403 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.699770 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.699784 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.707982 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w65tm\" (UniqueName: \"kubernetes.io/projected/0afc2c94-7e28-4344-b4be-807607a5c0e4-kube-api-access-w65tm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:43 crc kubenswrapper[4832]: I1002 19:12:43.751674 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:12:44 crc kubenswrapper[4832]: I1002 19:12:44.364506 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg"] Oct 02 19:12:44 crc kubenswrapper[4832]: I1002 19:12:44.382790 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 19:12:45 crc kubenswrapper[4832]: I1002 19:12:45.320464 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" event={"ID":"0afc2c94-7e28-4344-b4be-807607a5c0e4","Type":"ContainerStarted","Data":"1d44d9f7de54d503c50ca55b5d9affc89d047fb332c8b3b6cd64ced38f0bdc67"} Oct 02 19:12:45 crc kubenswrapper[4832]: I1002 19:12:45.321063 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" event={"ID":"0afc2c94-7e28-4344-b4be-807607a5c0e4","Type":"ContainerStarted","Data":"b0e4d43ec427dd4d4d1f9308c79d2c02c86a2c8a4cc6bcbce62694feaed02c7f"} Oct 02 19:12:45 crc kubenswrapper[4832]: I1002 19:12:45.352360 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" podStartSLOduration=1.792427808 podStartE2EDuration="2.352339003s" podCreationTimestamp="2025-10-02 19:12:43 +0000 UTC" firstStartedPulling="2025-10-02 19:12:44.382457087 +0000 UTC m=+3121.351899969" lastFinishedPulling="2025-10-02 19:12:44.942368252 +0000 UTC m=+3121.911811164" observedRunningTime="2025-10-02 19:12:45.342403074 +0000 UTC m=+3122.311845956" watchObservedRunningTime="2025-10-02 19:12:45.352339003 +0000 UTC m=+3122.321781885" Oct 02 19:12:56 crc kubenswrapper[4832]: I1002 19:12:56.875998 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:12:56 crc kubenswrapper[4832]: I1002 19:12:56.876807 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:12:56 crc kubenswrapper[4832]: I1002 19:12:56.876894 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 19:12:56 crc kubenswrapper[4832]: I1002 19:12:56.878532 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 19:12:56 crc kubenswrapper[4832]: I1002 19:12:56.878673 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" 
containerID="cri-o://b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" gracePeriod=600 Oct 02 19:12:57 crc kubenswrapper[4832]: E1002 19:12:57.020710 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:12:57 crc kubenswrapper[4832]: I1002 19:12:57.489353 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" exitCode=0 Oct 02 19:12:57 crc kubenswrapper[4832]: I1002 19:12:57.489413 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785"} Oct 02 19:12:57 crc kubenswrapper[4832]: I1002 19:12:57.489463 4832 scope.go:117] "RemoveContainer" containerID="0d262d6d86f361a7adda18b96b7965dcaf322b6f42ae0accdc6a02032608db0b" Oct 02 19:12:57 crc kubenswrapper[4832]: I1002 19:12:57.490841 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:12:57 crc kubenswrapper[4832]: E1002 19:12:57.491657 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:13:06 crc kubenswrapper[4832]: I1002 19:13:06.743816 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2svks"] Oct 02 19:13:06 crc kubenswrapper[4832]: I1002 19:13:06.749872 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:06 crc kubenswrapper[4832]: I1002 19:13:06.758383 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2svks"] Oct 02 19:13:06 crc kubenswrapper[4832]: I1002 19:13:06.920746 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwtrv\" (UniqueName: \"kubernetes.io/projected/b37f5aa5-f979-45ec-ab84-5adfa1172472-kube-api-access-hwtrv\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:06 crc kubenswrapper[4832]: I1002 19:13:06.920815 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-utilities\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:06 crc kubenswrapper[4832]: I1002 19:13:06.921055 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-catalog-content\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:07 crc kubenswrapper[4832]: I1002 19:13:07.023568 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwtrv\" (UniqueName: \"kubernetes.io/projected/b37f5aa5-f979-45ec-ab84-5adfa1172472-kube-api-access-hwtrv\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:07 crc kubenswrapper[4832]: I1002 19:13:07.023676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-utilities\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:07 crc kubenswrapper[4832]: I1002 19:13:07.023966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-catalog-content\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:07 crc kubenswrapper[4832]: I1002 19:13:07.024692 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-utilities\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:07 crc kubenswrapper[4832]: I1002 19:13:07.024787 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-catalog-content\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:07 crc kubenswrapper[4832]: I1002 19:13:07.054221 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hwtrv\" (UniqueName: \"kubernetes.io/projected/b37f5aa5-f979-45ec-ab84-5adfa1172472-kube-api-access-hwtrv\") pod \"redhat-marketplace-2svks\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:07 crc kubenswrapper[4832]: I1002 19:13:07.087938 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:07 crc kubenswrapper[4832]: I1002 19:13:07.634189 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2svks"] Oct 02 19:13:08 crc kubenswrapper[4832]: I1002 19:13:08.642789 4832 generic.go:334] "Generic (PLEG): container finished" podID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerID="8d523afe09c23f4d51d3437579ffe83368b171659610046886190883507177f0" exitCode=0 Oct 02 19:13:08 crc kubenswrapper[4832]: I1002 19:13:08.642861 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2svks" event={"ID":"b37f5aa5-f979-45ec-ab84-5adfa1172472","Type":"ContainerDied","Data":"8d523afe09c23f4d51d3437579ffe83368b171659610046886190883507177f0"} Oct 02 19:13:08 crc kubenswrapper[4832]: I1002 19:13:08.643109 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2svks" event={"ID":"b37f5aa5-f979-45ec-ab84-5adfa1172472","Type":"ContainerStarted","Data":"cff9df87cda104241ffb32d391ef646b2bf714c3d08495307ae3ddcb906546b7"} Oct 02 19:13:09 crc kubenswrapper[4832]: I1002 19:13:09.657524 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2svks" event={"ID":"b37f5aa5-f979-45ec-ab84-5adfa1172472","Type":"ContainerStarted","Data":"88297e8a723689fdeca12b6a445f059a6087e47c5764f7263605aacc5928e4ed"} Oct 02 19:13:10 crc kubenswrapper[4832]: I1002 19:13:10.675058 4832 generic.go:334] "Generic (PLEG): container finished" podID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerID="88297e8a723689fdeca12b6a445f059a6087e47c5764f7263605aacc5928e4ed" exitCode=0 Oct 02 19:13:10 crc kubenswrapper[4832]: I1002 19:13:10.675125 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2svks" event={"ID":"b37f5aa5-f979-45ec-ab84-5adfa1172472","Type":"ContainerDied","Data":"88297e8a723689fdeca12b6a445f059a6087e47c5764f7263605aacc5928e4ed"} Oct 02 19:13:11 crc kubenswrapper[4832]: I1002 19:13:11.223112 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:13:11 crc kubenswrapper[4832]: E1002 19:13:11.223701 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:13:13 crc kubenswrapper[4832]: I1002 19:13:13.718683 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2svks" event={"ID":"b37f5aa5-f979-45ec-ab84-5adfa1172472","Type":"ContainerStarted","Data":"969a89f07a6a8e2675a61c537207f3e9caad8f77337c5f05f1239a7fab55a409"} Oct 02 19:13:17 crc kubenswrapper[4832]: I1002 19:13:17.088934 4832 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:17 crc kubenswrapper[4832]: I1002 19:13:17.089623 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:17 crc kubenswrapper[4832]: I1002 19:13:17.182107 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:17 crc kubenswrapper[4832]: I1002 19:13:17.220643 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2svks" podStartSLOduration=7.375774558 podStartE2EDuration="11.220620782s" podCreationTimestamp="2025-10-02 19:13:06 +0000 UTC" firstStartedPulling="2025-10-02 19:13:08.645518992 +0000 UTC m=+3145.614961904" lastFinishedPulling="2025-10-02 19:13:12.490365246 +0000 UTC m=+3149.459808128" observedRunningTime="2025-10-02 19:13:13.747866538 +0000 UTC m=+3150.717309420" watchObservedRunningTime="2025-10-02 19:13:17.220620782 +0000 UTC m=+3154.190063694" Oct 02 19:13:17 crc kubenswrapper[4832]: I1002 19:13:17.820409 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:23 crc kubenswrapper[4832]: I1002 19:13:23.327183 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2svks"] Oct 02 19:13:23 crc kubenswrapper[4832]: I1002 19:13:23.328323 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2svks" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerName="registry-server" containerID="cri-o://969a89f07a6a8e2675a61c537207f3e9caad8f77337c5f05f1239a7fab55a409" gracePeriod=2 Oct 02 19:13:23 crc kubenswrapper[4832]: I1002 19:13:23.846597 4832 generic.go:334] "Generic (PLEG): container finished" podID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerID="969a89f07a6a8e2675a61c537207f3e9caad8f77337c5f05f1239a7fab55a409" exitCode=0 Oct 02 19:13:23 crc kubenswrapper[4832]: I1002 19:13:23.846813 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2svks" event={"ID":"b37f5aa5-f979-45ec-ab84-5adfa1172472","Type":"ContainerDied","Data":"969a89f07a6a8e2675a61c537207f3e9caad8f77337c5f05f1239a7fab55a409"} Oct 02 19:13:23 crc kubenswrapper[4832]: I1002 19:13:23.846907 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2svks" event={"ID":"b37f5aa5-f979-45ec-ab84-5adfa1172472","Type":"ContainerDied","Data":"cff9df87cda104241ffb32d391ef646b2bf714c3d08495307ae3ddcb906546b7"} Oct 02 19:13:23 crc kubenswrapper[4832]: I1002 19:13:23.846923 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff9df87cda104241ffb32d391ef646b2bf714c3d08495307ae3ddcb906546b7" Oct 02 19:13:23 crc kubenswrapper[4832]: I1002 19:13:23.937667 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.033716 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwtrv\" (UniqueName: \"kubernetes.io/projected/b37f5aa5-f979-45ec-ab84-5adfa1172472-kube-api-access-hwtrv\") pod \"b37f5aa5-f979-45ec-ab84-5adfa1172472\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.033888 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-catalog-content\") pod \"b37f5aa5-f979-45ec-ab84-5adfa1172472\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.033998 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-utilities\") pod \"b37f5aa5-f979-45ec-ab84-5adfa1172472\" (UID: \"b37f5aa5-f979-45ec-ab84-5adfa1172472\") " Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.034869 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-utilities" (OuterVolumeSpecName: "utilities") pod "b37f5aa5-f979-45ec-ab84-5adfa1172472" (UID: "b37f5aa5-f979-45ec-ab84-5adfa1172472"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.040344 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37f5aa5-f979-45ec-ab84-5adfa1172472-kube-api-access-hwtrv" (OuterVolumeSpecName: "kube-api-access-hwtrv") pod "b37f5aa5-f979-45ec-ab84-5adfa1172472" (UID: "b37f5aa5-f979-45ec-ab84-5adfa1172472"). InnerVolumeSpecName "kube-api-access-hwtrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.061257 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b37f5aa5-f979-45ec-ab84-5adfa1172472" (UID: "b37f5aa5-f979-45ec-ab84-5adfa1172472"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.137350 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwtrv\" (UniqueName: \"kubernetes.io/projected/b37f5aa5-f979-45ec-ab84-5adfa1172472-kube-api-access-hwtrv\") on node \"crc\" DevicePath \"\"" Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.137383 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.137395 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b37f5aa5-f979-45ec-ab84-5adfa1172472-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.856828 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2svks" Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.892195 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2svks"] Oct 02 19:13:24 crc kubenswrapper[4832]: I1002 19:13:24.904547 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2svks"] Oct 02 19:13:25 crc kubenswrapper[4832]: I1002 19:13:25.235934 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" path="/var/lib/kubelet/pods/b37f5aa5-f979-45ec-ab84-5adfa1172472/volumes" Oct 02 19:13:26 crc kubenswrapper[4832]: I1002 19:13:26.223083 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:13:26 crc kubenswrapper[4832]: E1002 19:13:26.223975 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:13:40 crc kubenswrapper[4832]: I1002 19:13:40.222617 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:13:40 crc kubenswrapper[4832]: E1002 19:13:40.223663 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:13:55 crc kubenswrapper[4832]: I1002 19:13:55.239992 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:13:55 crc kubenswrapper[4832]: E1002 19:13:55.242798 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:14:10 crc kubenswrapper[4832]: I1002 19:14:10.223072 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:14:10 crc kubenswrapper[4832]: E1002 19:14:10.224222 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:14:25 crc kubenswrapper[4832]: I1002 19:14:25.246490 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:14:25 
crc kubenswrapper[4832]: E1002 19:14:25.248442 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:14:38 crc kubenswrapper[4832]: I1002 19:14:38.223708 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:14:38 crc kubenswrapper[4832]: E1002 19:14:38.224800 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:14:52 crc kubenswrapper[4832]: I1002 19:14:52.223155 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:14:52 crc kubenswrapper[4832]: E1002 19:14:52.224015 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.195218 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6"] Oct 02 19:15:00 crc kubenswrapper[4832]: E1002 19:15:00.196310 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerName="registry-server" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.196325 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerName="registry-server" Oct 02 19:15:00 crc kubenswrapper[4832]: E1002 19:15:00.196357 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerName="extract-utilities" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.196363 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerName="extract-utilities" Oct 02 19:15:00 crc kubenswrapper[4832]: E1002 19:15:00.196397 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerName="extract-content" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.196403 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerName="extract-content" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.196630 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37f5aa5-f979-45ec-ab84-5adfa1172472" containerName="registry-server" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.197592 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.199366 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.200324 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.212917 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6"] Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.327860 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80f707fb-5732-4d23-be51-ba82116f8e1e-secret-volume\") pod \"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.327988 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80f707fb-5732-4d23-be51-ba82116f8e1e-config-volume\") pod \"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.328406 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86264\" (UniqueName: \"kubernetes.io/projected/80f707fb-5732-4d23-be51-ba82116f8e1e-kube-api-access-86264\") pod \"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.430884 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80f707fb-5732-4d23-be51-ba82116f8e1e-secret-volume\") pod \"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.431016 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80f707fb-5732-4d23-be51-ba82116f8e1e-config-volume\") pod \"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.431075 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86264\" (UniqueName: \"kubernetes.io/projected/80f707fb-5732-4d23-be51-ba82116f8e1e-kube-api-access-86264\") pod \"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.431885 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80f707fb-5732-4d23-be51-ba82116f8e1e-config-volume\") pod 
\"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.436773 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80f707fb-5732-4d23-be51-ba82116f8e1e-secret-volume\") pod \"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.468892 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86264\" (UniqueName: \"kubernetes.io/projected/80f707fb-5732-4d23-be51-ba82116f8e1e-kube-api-access-86264\") pod \"collect-profiles-29323875-8mtr6\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:00 crc kubenswrapper[4832]: I1002 19:15:00.527726 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:01 crc kubenswrapper[4832]: W1002 19:15:01.036167 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80f707fb_5732_4d23_be51_ba82116f8e1e.slice/crio-99652d0a320769e5f78b146368fc50fbada0a25ffee1169ceb872429c63e6d46 WatchSource:0}: Error finding container 99652d0a320769e5f78b146368fc50fbada0a25ffee1169ceb872429c63e6d46: Status 404 returned error can't find the container with id 99652d0a320769e5f78b146368fc50fbada0a25ffee1169ceb872429c63e6d46 Oct 02 19:15:01 crc kubenswrapper[4832]: I1002 19:15:01.043822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6"] Oct 02 19:15:01 crc kubenswrapper[4832]: I1002 19:15:01.090118 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" event={"ID":"80f707fb-5732-4d23-be51-ba82116f8e1e","Type":"ContainerStarted","Data":"99652d0a320769e5f78b146368fc50fbada0a25ffee1169ceb872429c63e6d46"} Oct 02 19:15:02 crc kubenswrapper[4832]: I1002 19:15:02.103231 4832 generic.go:334] "Generic (PLEG): container finished" podID="80f707fb-5732-4d23-be51-ba82116f8e1e" containerID="1ea336185c8768f4d6dc1a5a9b3e9314f6edc58a74c2936ad45f7072aea93062" exitCode=0 Oct 02 19:15:02 crc kubenswrapper[4832]: I1002 19:15:02.103373 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" event={"ID":"80f707fb-5732-4d23-be51-ba82116f8e1e","Type":"ContainerDied","Data":"1ea336185c8768f4d6dc1a5a9b3e9314f6edc58a74c2936ad45f7072aea93062"} Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.225355 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:15:03 crc kubenswrapper[4832]: E1002 19:15:03.225791 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" 
podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.570931 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.733738 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80f707fb-5732-4d23-be51-ba82116f8e1e-secret-volume\") pod \"80f707fb-5732-4d23-be51-ba82116f8e1e\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.733775 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80f707fb-5732-4d23-be51-ba82116f8e1e-config-volume\") pod \"80f707fb-5732-4d23-be51-ba82116f8e1e\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.733876 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86264\" (UniqueName: \"kubernetes.io/projected/80f707fb-5732-4d23-be51-ba82116f8e1e-kube-api-access-86264\") pod \"80f707fb-5732-4d23-be51-ba82116f8e1e\" (UID: \"80f707fb-5732-4d23-be51-ba82116f8e1e\") " Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.735684 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f707fb-5732-4d23-be51-ba82116f8e1e-config-volume" (OuterVolumeSpecName: "config-volume") pod "80f707fb-5732-4d23-be51-ba82116f8e1e" (UID: "80f707fb-5732-4d23-be51-ba82116f8e1e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.741091 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80f707fb-5732-4d23-be51-ba82116f8e1e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "80f707fb-5732-4d23-be51-ba82116f8e1e" (UID: "80f707fb-5732-4d23-be51-ba82116f8e1e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.750300 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f707fb-5732-4d23-be51-ba82116f8e1e-kube-api-access-86264" (OuterVolumeSpecName: "kube-api-access-86264") pod "80f707fb-5732-4d23-be51-ba82116f8e1e" (UID: "80f707fb-5732-4d23-be51-ba82116f8e1e"). InnerVolumeSpecName "kube-api-access-86264". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.837190 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80f707fb-5732-4d23-be51-ba82116f8e1e-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.837238 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80f707fb-5732-4d23-be51-ba82116f8e1e-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:03 crc kubenswrapper[4832]: I1002 19:15:03.837253 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86264\" (UniqueName: \"kubernetes.io/projected/80f707fb-5732-4d23-be51-ba82116f8e1e-kube-api-access-86264\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:04 crc kubenswrapper[4832]: I1002 19:15:04.129179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" event={"ID":"80f707fb-5732-4d23-be51-ba82116f8e1e","Type":"ContainerDied","Data":"99652d0a320769e5f78b146368fc50fbada0a25ffee1169ceb872429c63e6d46"} Oct 02 19:15:04 crc kubenswrapper[4832]: I1002 19:15:04.129506 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99652d0a320769e5f78b146368fc50fbada0a25ffee1169ceb872429c63e6d46" Oct 02 19:15:04 crc kubenswrapper[4832]: I1002 19:15:04.129315 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6" Oct 02 19:15:04 crc kubenswrapper[4832]: E1002 19:15:04.255345 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0afc2c94_7e28_4344_b4be_807607a5c0e4.slice/crio-1d44d9f7de54d503c50ca55b5d9affc89d047fb332c8b3b6cd64ced38f0bdc67.scope\": RecentStats: unable to find data in memory cache]" Oct 02 19:15:04 crc kubenswrapper[4832]: I1002 19:15:04.652791 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc"] Oct 02 19:15:04 crc kubenswrapper[4832]: I1002 19:15:04.663513 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323830-6x2gc"] Oct 02 19:15:05 crc kubenswrapper[4832]: I1002 19:15:05.150885 4832 generic.go:334] "Generic (PLEG): container finished" podID="0afc2c94-7e28-4344-b4be-807607a5c0e4" containerID="1d44d9f7de54d503c50ca55b5d9affc89d047fb332c8b3b6cd64ced38f0bdc67" exitCode=0 Oct 02 19:15:05 crc kubenswrapper[4832]: I1002 19:15:05.150983 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" event={"ID":"0afc2c94-7e28-4344-b4be-807607a5c0e4","Type":"ContainerDied","Data":"1d44d9f7de54d503c50ca55b5d9affc89d047fb332c8b3b6cd64ced38f0bdc67"} Oct 02 19:15:05 crc kubenswrapper[4832]: I1002 19:15:05.247387 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed091cf-edba-49a6-96fc-878ea590bfa8" path="/var/lib/kubelet/pods/aed091cf-edba-49a6-96fc-878ea590bfa8/volumes" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.685046 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.815122 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-telemetry-power-monitoring-combined-ca-bundle\") pod \"0afc2c94-7e28-4344-b4be-807607a5c0e4\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.815191 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-0\") pod \"0afc2c94-7e28-4344-b4be-807607a5c0e4\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.815254 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w65tm\" (UniqueName: \"kubernetes.io/projected/0afc2c94-7e28-4344-b4be-807607a5c0e4-kube-api-access-w65tm\") pod \"0afc2c94-7e28-4344-b4be-807607a5c0e4\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.815340 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-2\") pod \"0afc2c94-7e28-4344-b4be-807607a5c0e4\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.815413 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ssh-key\") pod \"0afc2c94-7e28-4344-b4be-807607a5c0e4\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.815479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-1\") pod \"0afc2c94-7e28-4344-b4be-807607a5c0e4\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.815532 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-inventory\") pod \"0afc2c94-7e28-4344-b4be-807607a5c0e4\" (UID: \"0afc2c94-7e28-4344-b4be-807607a5c0e4\") " Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.820967 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "0afc2c94-7e28-4344-b4be-807607a5c0e4" (UID: "0afc2c94-7e28-4344-b4be-807607a5c0e4"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.825889 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afc2c94-7e28-4344-b4be-807607a5c0e4-kube-api-access-w65tm" (OuterVolumeSpecName: "kube-api-access-w65tm") pod "0afc2c94-7e28-4344-b4be-807607a5c0e4" (UID: "0afc2c94-7e28-4344-b4be-807607a5c0e4"). InnerVolumeSpecName "kube-api-access-w65tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.847123 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-inventory" (OuterVolumeSpecName: "inventory") pod "0afc2c94-7e28-4344-b4be-807607a5c0e4" (UID: "0afc2c94-7e28-4344-b4be-807607a5c0e4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.852418 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "0afc2c94-7e28-4344-b4be-807607a5c0e4" (UID: "0afc2c94-7e28-4344-b4be-807607a5c0e4"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.855674 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "0afc2c94-7e28-4344-b4be-807607a5c0e4" (UID: "0afc2c94-7e28-4344-b4be-807607a5c0e4"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.867352 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0afc2c94-7e28-4344-b4be-807607a5c0e4" (UID: "0afc2c94-7e28-4344-b4be-807607a5c0e4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.867403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "0afc2c94-7e28-4344-b4be-807607a5c0e4" (UID: "0afc2c94-7e28-4344-b4be-807607a5c0e4"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.917871 4832 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.917905 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.917917 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w65tm\" (UniqueName: \"kubernetes.io/projected/0afc2c94-7e28-4344-b4be-807607a5c0e4-kube-api-access-w65tm\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.917926 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.917935 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.917943 4832 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:06 crc kubenswrapper[4832]: I1002 19:15:06.917954 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0afc2c94-7e28-4344-b4be-807607a5c0e4-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.173860 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" event={"ID":"0afc2c94-7e28-4344-b4be-807607a5c0e4","Type":"ContainerDied","Data":"b0e4d43ec427dd4d4d1f9308c79d2c02c86a2c8a4cc6bcbce62694feaed02c7f"} Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.173910 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e4d43ec427dd4d4d1f9308c79d2c02c86a2c8a4cc6bcbce62694feaed02c7f" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.173930 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.343609 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88"] Oct 02 19:15:07 crc kubenswrapper[4832]: E1002 19:15:07.344148 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f707fb-5732-4d23-be51-ba82116f8e1e" containerName="collect-profiles" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.344181 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f707fb-5732-4d23-be51-ba82116f8e1e" containerName="collect-profiles" Oct 02 19:15:07 crc kubenswrapper[4832]: E1002 19:15:07.344228 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afc2c94-7e28-4344-b4be-807607a5c0e4" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.344239 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afc2c94-7e28-4344-b4be-807607a5c0e4" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.344538 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f707fb-5732-4d23-be51-ba82116f8e1e" containerName="collect-profiles" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.344574 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afc2c94-7e28-4344-b4be-807607a5c0e4" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.345405 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.351190 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.351616 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.351640 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.351913 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xl9v6" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.356995 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.357529 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88"] Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.430345 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96928\" (UniqueName: \"kubernetes.io/projected/9e0f6923-879e-41f9-9c8b-f0cfede7221f-kube-api-access-96928\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.430479 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.430559 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.430744 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.430810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.533938 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.534024 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.534156 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96928\" (UniqueName: \"kubernetes.io/projected/9e0f6923-879e-41f9-9c8b-f0cfede7221f-kube-api-access-96928\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.534228 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.534312 4832 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.538665 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.538903 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.540812 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.542195 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.555436 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96928\" (UniqueName: \"kubernetes.io/projected/9e0f6923-879e-41f9-9c8b-f0cfede7221f-kube-api-access-96928\") pod \"logging-edpm-deployment-openstack-edpm-ipam-r9c88\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:07 crc kubenswrapper[4832]: I1002 19:15:07.691203 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:08 crc kubenswrapper[4832]: I1002 19:15:08.364004 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88"] Oct 02 19:15:09 crc kubenswrapper[4832]: I1002 19:15:09.206938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" event={"ID":"9e0f6923-879e-41f9-9c8b-f0cfede7221f","Type":"ContainerStarted","Data":"1b6f4682e300dd3c1d93c98f0ab6b024d739201d2148e50f3a546ea4949e761b"} Oct 02 19:15:10 crc kubenswrapper[4832]: I1002 19:15:10.217990 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" event={"ID":"9e0f6923-879e-41f9-9c8b-f0cfede7221f","Type":"ContainerStarted","Data":"505aa2a814ff89345f9af76857d2cc69ce2be5421db218ce5cff692c95cb90fb"} Oct 02 19:15:10 crc kubenswrapper[4832]: I1002 19:15:10.252969 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" podStartSLOduration=2.720926007 podStartE2EDuration="3.252945404s" podCreationTimestamp="2025-10-02 19:15:07 +0000 UTC" firstStartedPulling="2025-10-02 19:15:08.369668829 +0000 UTC m=+3265.339111701" lastFinishedPulling="2025-10-02 19:15:08.901688206 +0000 UTC m=+3265.871131098" observedRunningTime="2025-10-02 19:15:10.237976398 +0000 UTC m=+3267.207419280" watchObservedRunningTime="2025-10-02 19:15:10.252945404 +0000 UTC m=+3267.222388296" Oct 02 19:15:14 crc kubenswrapper[4832]: I1002 19:15:14.225077 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:15:14 crc kubenswrapper[4832]: E1002 19:15:14.226289 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:15:19 crc kubenswrapper[4832]: I1002 19:15:19.597333 4832 scope.go:117] "RemoveContainer" containerID="10ba5474cb84559a020e1d27a7a0af7fd0ebb8b2d5f8e7aecb9b5d0ccc96232f" Oct 02 19:15:26 crc kubenswrapper[4832]: I1002 19:15:26.223159 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:15:26 crc kubenswrapper[4832]: E1002 19:15:26.223936 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:15:26 crc kubenswrapper[4832]: I1002 19:15:26.450948 4832 generic.go:334] "Generic (PLEG): container finished" podID="9e0f6923-879e-41f9-9c8b-f0cfede7221f" containerID="505aa2a814ff89345f9af76857d2cc69ce2be5421db218ce5cff692c95cb90fb" exitCode=0 Oct 02 19:15:26 crc kubenswrapper[4832]: I1002 19:15:26.451057 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" event={"ID":"9e0f6923-879e-41f9-9c8b-f0cfede7221f","Type":"ContainerDied","Data":"505aa2a814ff89345f9af76857d2cc69ce2be5421db218ce5cff692c95cb90fb"} Oct 02 19:15:27 crc kubenswrapper[4832]: I1002 19:15:27.980758 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.120405 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-0\") pod \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.120625 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-ssh-key\") pod \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.120681 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-inventory\") pod \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.120749 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96928\" (UniqueName: \"kubernetes.io/projected/9e0f6923-879e-41f9-9c8b-f0cfede7221f-kube-api-access-96928\") pod \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.120895 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-1\") pod \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\" (UID: \"9e0f6923-879e-41f9-9c8b-f0cfede7221f\") " Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.130713 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0f6923-879e-41f9-9c8b-f0cfede7221f-kube-api-access-96928" (OuterVolumeSpecName: "kube-api-access-96928") pod "9e0f6923-879e-41f9-9c8b-f0cfede7221f" (UID: "9e0f6923-879e-41f9-9c8b-f0cfede7221f"). InnerVolumeSpecName "kube-api-access-96928". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.156108 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "9e0f6923-879e-41f9-9c8b-f0cfede7221f" (UID: "9e0f6923-879e-41f9-9c8b-f0cfede7221f"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.178403 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "9e0f6923-879e-41f9-9c8b-f0cfede7221f" (UID: "9e0f6923-879e-41f9-9c8b-f0cfede7221f"). 
InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.180100 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9e0f6923-879e-41f9-9c8b-f0cfede7221f" (UID: "9e0f6923-879e-41f9-9c8b-f0cfede7221f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.186937 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-inventory" (OuterVolumeSpecName: "inventory") pod "9e0f6923-879e-41f9-9c8b-f0cfede7221f" (UID: "9e0f6923-879e-41f9-9c8b-f0cfede7221f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.224529 4832 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.224602 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.224621 4832 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-inventory\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.224643 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96928\" (UniqueName: \"kubernetes.io/projected/9e0f6923-879e-41f9-9c8b-f0cfede7221f-kube-api-access-96928\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.224672 4832 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9e0f6923-879e-41f9-9c8b-f0cfede7221f-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.478432 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" event={"ID":"9e0f6923-879e-41f9-9c8b-f0cfede7221f","Type":"ContainerDied","Data":"1b6f4682e300dd3c1d93c98f0ab6b024d739201d2148e50f3a546ea4949e761b"} Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.478491 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b6f4682e300dd3c1d93c98f0ab6b024d739201d2148e50f3a546ea4949e761b" Oct 02 19:15:28 crc kubenswrapper[4832]: I1002 19:15:28.478569 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-r9c88" Oct 02 19:15:39 crc kubenswrapper[4832]: I1002 19:15:39.225382 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:15:39 crc kubenswrapper[4832]: E1002 19:15:39.226633 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:15:50 crc kubenswrapper[4832]: I1002 19:15:50.223037 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:15:50 crc kubenswrapper[4832]: E1002 19:15:50.223708 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:16:03 crc kubenswrapper[4832]: I1002 19:16:03.231961 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:16:03 crc kubenswrapper[4832]: E1002 19:16:03.234351 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:16:15 crc kubenswrapper[4832]: I1002 19:16:15.240938 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:16:15 crc kubenswrapper[4832]: E1002 19:16:15.242009 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:16:28 crc kubenswrapper[4832]: I1002 19:16:28.224028 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:16:28 crc kubenswrapper[4832]: E1002 19:16:28.224887 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:16:41 crc kubenswrapper[4832]: I1002 19:16:41.223615 4832 scope.go:117] "RemoveContainer" 
containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:16:41 crc kubenswrapper[4832]: E1002 19:16:41.224419 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:16:53 crc kubenswrapper[4832]: I1002 19:16:53.222859 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:16:53 crc kubenswrapper[4832]: E1002 19:16:53.223636 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:17:04 crc kubenswrapper[4832]: I1002 19:17:04.225581 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:17:04 crc kubenswrapper[4832]: E1002 19:17:04.226611 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.372031 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9649v"] Oct 02 19:17:13 crc kubenswrapper[4832]: E1002 19:17:13.373633 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0f6923-879e-41f9-9c8b-f0cfede7221f" containerName="logging-edpm-deployment-openstack-edpm-ipam" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.373659 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0f6923-879e-41f9-9c8b-f0cfede7221f" containerName="logging-edpm-deployment-openstack-edpm-ipam" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.374096 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0f6923-879e-41f9-9c8b-f0cfede7221f" containerName="logging-edpm-deployment-openstack-edpm-ipam" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.377128 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.396715 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9649v"] Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.441390 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26fg\" (UniqueName: \"kubernetes.io/projected/6574a0c8-8353-4043-b904-13a5f2eb8d27-kube-api-access-x26fg\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.441586 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-utilities\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.441618 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-catalog-content\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.543906 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26fg\" (UniqueName: \"kubernetes.io/projected/6574a0c8-8353-4043-b904-13a5f2eb8d27-kube-api-access-x26fg\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.544188 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-utilities\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.544244 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-catalog-content\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.544983 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-catalog-content\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.545044 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-utilities\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.577420 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x26fg\" (UniqueName: \"kubernetes.io/projected/6574a0c8-8353-4043-b904-13a5f2eb8d27-kube-api-access-x26fg\") pod \"community-operators-9649v\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:13 crc kubenswrapper[4832]: I1002 19:17:13.721165 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:14 crc kubenswrapper[4832]: I1002 19:17:14.316455 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9649v"] Oct 02 19:17:14 crc kubenswrapper[4832]: I1002 19:17:14.947524 4832 generic.go:334] "Generic (PLEG): container finished" podID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerID="15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772" exitCode=0 Oct 02 19:17:14 crc kubenswrapper[4832]: I1002 19:17:14.947624 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9649v" event={"ID":"6574a0c8-8353-4043-b904-13a5f2eb8d27","Type":"ContainerDied","Data":"15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772"} Oct 02 19:17:14 crc kubenswrapper[4832]: I1002 19:17:14.949006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9649v" event={"ID":"6574a0c8-8353-4043-b904-13a5f2eb8d27","Type":"ContainerStarted","Data":"c172fc08748116093b573c540a28f3f676b82caf2acbd55057408704af541077"} Oct 02 19:17:15 crc kubenswrapper[4832]: I1002 19:17:15.964157 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9649v" event={"ID":"6574a0c8-8353-4043-b904-13a5f2eb8d27","Type":"ContainerStarted","Data":"42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91"} Oct 02 19:17:17 crc kubenswrapper[4832]: I1002 19:17:17.994243 4832 generic.go:334] "Generic (PLEG): container finished" podID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerID="42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91" exitCode=0 Oct 02 19:17:17 crc kubenswrapper[4832]: I1002 19:17:17.994358 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9649v" event={"ID":"6574a0c8-8353-4043-b904-13a5f2eb8d27","Type":"ContainerDied","Data":"42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91"} Oct 02 19:17:19 crc kubenswrapper[4832]: I1002 19:17:19.008231 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9649v" event={"ID":"6574a0c8-8353-4043-b904-13a5f2eb8d27","Type":"ContainerStarted","Data":"6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3"} Oct 02 19:17:19 crc kubenswrapper[4832]: I1002 19:17:19.043653 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9649v" podStartSLOduration=2.460013087 podStartE2EDuration="6.043636768s" podCreationTimestamp="2025-10-02 19:17:13 +0000 UTC" firstStartedPulling="2025-10-02 19:17:14.949698816 +0000 UTC m=+3391.919141688" lastFinishedPulling="2025-10-02 19:17:18.533322487 +0000 UTC m=+3395.502765369" observedRunningTime="2025-10-02 19:17:19.041903665 +0000 UTC m=+3396.011346557" watchObservedRunningTime="2025-10-02 19:17:19.043636768 +0000 UTC m=+3396.013079640" Oct 02 19:17:19 crc kubenswrapper[4832]: I1002 19:17:19.223646 4832 scope.go:117] "RemoveContainer" 
containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:17:19 crc kubenswrapper[4832]: E1002 19:17:19.223960 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:17:23 crc kubenswrapper[4832]: I1002 19:17:23.722022 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:23 crc kubenswrapper[4832]: I1002 19:17:23.722664 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:23 crc kubenswrapper[4832]: I1002 19:17:23.798683 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:24 crc kubenswrapper[4832]: I1002 19:17:24.130335 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:24 crc kubenswrapper[4832]: I1002 19:17:24.201164 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9649v"] Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.080761 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9649v" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerName="registry-server" containerID="cri-o://6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3" gracePeriod=2 Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.622086 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.700192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x26fg\" (UniqueName: \"kubernetes.io/projected/6574a0c8-8353-4043-b904-13a5f2eb8d27-kube-api-access-x26fg\") pod \"6574a0c8-8353-4043-b904-13a5f2eb8d27\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.700240 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-utilities\") pod \"6574a0c8-8353-4043-b904-13a5f2eb8d27\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.700440 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-catalog-content\") pod \"6574a0c8-8353-4043-b904-13a5f2eb8d27\" (UID: \"6574a0c8-8353-4043-b904-13a5f2eb8d27\") " Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.704015 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-utilities" (OuterVolumeSpecName: "utilities") pod "6574a0c8-8353-4043-b904-13a5f2eb8d27" (UID: "6574a0c8-8353-4043-b904-13a5f2eb8d27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.714576 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6574a0c8-8353-4043-b904-13a5f2eb8d27-kube-api-access-x26fg" (OuterVolumeSpecName: "kube-api-access-x26fg") pod "6574a0c8-8353-4043-b904-13a5f2eb8d27" (UID: "6574a0c8-8353-4043-b904-13a5f2eb8d27"). InnerVolumeSpecName "kube-api-access-x26fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.780089 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6574a0c8-8353-4043-b904-13a5f2eb8d27" (UID: "6574a0c8-8353-4043-b904-13a5f2eb8d27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.803978 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x26fg\" (UniqueName: \"kubernetes.io/projected/6574a0c8-8353-4043-b904-13a5f2eb8d27-kube-api-access-x26fg\") on node \"crc\" DevicePath \"\"" Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.804011 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:17:26 crc kubenswrapper[4832]: I1002 19:17:26.804021 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6574a0c8-8353-4043-b904-13a5f2eb8d27-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.095606 4832 generic.go:334] "Generic (PLEG): container finished" podID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerID="6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3" exitCode=0 Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.095669 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9649v" event={"ID":"6574a0c8-8353-4043-b904-13a5f2eb8d27","Type":"ContainerDied","Data":"6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3"} Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.095735 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9649v" event={"ID":"6574a0c8-8353-4043-b904-13a5f2eb8d27","Type":"ContainerDied","Data":"c172fc08748116093b573c540a28f3f676b82caf2acbd55057408704af541077"} Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.095745 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9649v" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.095768 4832 scope.go:117] "RemoveContainer" containerID="6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.118364 4832 scope.go:117] "RemoveContainer" containerID="42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.192053 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9649v"] Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.192138 4832 scope.go:117] "RemoveContainer" containerID="15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.211325 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9649v"] Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.235711 4832 scope.go:117] "RemoveContainer" containerID="6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3" Oct 02 19:17:27 crc kubenswrapper[4832]: E1002 19:17:27.236409 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3\": container with ID starting with 6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3 not found: ID does not exist" containerID="6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.236482 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3"} err="failed to get container status \"6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3\": rpc error: code = NotFound desc = could not find container \"6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3\": container with ID starting with 6f1250c27a31531b847bce0d79cf8355499650db19a1f345ea8c8e6bad290de3 not found: ID does not exist" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.236526 4832 scope.go:117] "RemoveContainer" containerID="42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91" Oct 02 19:17:27 crc kubenswrapper[4832]: E1002 19:17:27.237805 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91\": container with ID starting with 42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91 not found: ID does not exist" containerID="42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.237849 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91"} err="failed to get container status \"42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91\": rpc error: code = NotFound desc = could not find container \"42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91\": container with ID starting with 42c80ed68275c54df56cf2e39743240f7430231e9e3230e9c42c6b78b0fe1d91 not found: ID does not exist" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.237876 4832 scope.go:117] "RemoveContainer" 
containerID="15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772" Oct 02 19:17:27 crc kubenswrapper[4832]: E1002 19:17:27.241486 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772\": container with ID starting with 15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772 not found: ID does not exist" containerID="15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.241528 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772"} err="failed to get container status \"15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772\": rpc error: code = NotFound desc = could not find container \"15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772\": container with ID starting with 15a80a3b19156e97a35438b273eb247a79f735893a832c64f878b3c6570ef772 not found: ID does not exist" Oct 02 19:17:27 crc kubenswrapper[4832]: I1002 19:17:27.243870 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" path="/var/lib/kubelet/pods/6574a0c8-8353-4043-b904-13a5f2eb8d27/volumes" Oct 02 19:17:27 crc kubenswrapper[4832]: E1002 19:17:27.343145 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6574a0c8_8353_4043_b904_13a5f2eb8d27.slice/crio-c172fc08748116093b573c540a28f3f676b82caf2acbd55057408704af541077\": RecentStats: unable to find data in memory cache]" Oct 02 19:17:31 crc kubenswrapper[4832]: I1002 19:17:31.223528 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:17:31 crc kubenswrapper[4832]: E1002 19:17:31.224307 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:17:44 crc kubenswrapper[4832]: I1002 19:17:44.226326 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:17:44 crc kubenswrapper[4832]: E1002 19:17:44.227768 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:17:57 crc kubenswrapper[4832]: I1002 19:17:57.223746 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:17:57 crc kubenswrapper[4832]: I1002 19:17:57.520440 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" 
event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"5c75447414fde7d9ee4905d41818ed0cf89bd7f0d47454db8b2a2692d457c8df"} Oct 02 19:19:19 crc kubenswrapper[4832]: I1002 19:19:19.814759 4832 scope.go:117] "RemoveContainer" containerID="8d523afe09c23f4d51d3437579ffe83368b171659610046886190883507177f0" Oct 02 19:19:19 crc kubenswrapper[4832]: I1002 19:19:19.840851 4832 scope.go:117] "RemoveContainer" containerID="969a89f07a6a8e2675a61c537207f3e9caad8f77337c5f05f1239a7fab55a409" Oct 02 19:19:19 crc kubenswrapper[4832]: I1002 19:19:19.938204 4832 scope.go:117] "RemoveContainer" containerID="88297e8a723689fdeca12b6a445f059a6087e47c5764f7263605aacc5928e4ed" Oct 02 19:20:26 crc kubenswrapper[4832]: I1002 19:20:26.875789 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:20:26 crc kubenswrapper[4832]: I1002 19:20:26.876241 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:20:56 crc kubenswrapper[4832]: I1002 19:20:56.875287 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:20:56 crc kubenswrapper[4832]: I1002 19:20:56.875950 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:21:26 crc kubenswrapper[4832]: I1002 19:21:26.875337 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:21:26 crc kubenswrapper[4832]: I1002 19:21:26.875849 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:21:26 crc kubenswrapper[4832]: I1002 19:21:26.875895 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 19:21:26 crc kubenswrapper[4832]: I1002 19:21:26.876753 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c75447414fde7d9ee4905d41818ed0cf89bd7f0d47454db8b2a2692d457c8df"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Oct 02 19:21:26 crc kubenswrapper[4832]: I1002 19:21:26.876807 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://5c75447414fde7d9ee4905d41818ed0cf89bd7f0d47454db8b2a2692d457c8df" gracePeriod=600 Oct 02 19:21:27 crc kubenswrapper[4832]: I1002 19:21:27.246833 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="5c75447414fde7d9ee4905d41818ed0cf89bd7f0d47454db8b2a2692d457c8df" exitCode=0 Oct 02 19:21:27 crc kubenswrapper[4832]: I1002 19:21:27.246849 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"5c75447414fde7d9ee4905d41818ed0cf89bd7f0d47454db8b2a2692d457c8df"} Oct 02 19:21:27 crc kubenswrapper[4832]: I1002 19:21:27.247237 4832 scope.go:117] "RemoveContainer" containerID="b6346a80ae93a1f3499dbcb60bdd695bc602ce7b291e32eed7090fdce3d93785" Oct 02 19:21:28 crc kubenswrapper[4832]: I1002 19:21:28.259328 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17"} Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.357128 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j2mxc"] Oct 02 19:23:23 crc kubenswrapper[4832]: E1002 19:23:23.358843 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerName="registry-server" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.358865 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerName="registry-server" Oct 02 19:23:23 crc kubenswrapper[4832]: E1002 19:23:23.358892 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerName="extract-utilities" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.358900 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerName="extract-utilities" Oct 02 19:23:23 crc kubenswrapper[4832]: E1002 19:23:23.358922 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerName="extract-content" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.358930 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerName="extract-content" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.359276 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6574a0c8-8353-4043-b904-13a5f2eb8d27" containerName="registry-server" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.361390 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.386284 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2mxc"] Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.472874 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwr6\" (UniqueName: \"kubernetes.io/projected/4b30ce9e-e533-4101-88cf-2e63bd5b4000-kube-api-access-jpwr6\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.473075 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-utilities\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.473295 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-catalog-content\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.575160 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpwr6\" (UniqueName: \"kubernetes.io/projected/4b30ce9e-e533-4101-88cf-2e63bd5b4000-kube-api-access-jpwr6\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.575331 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-utilities\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.575755 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-utilities\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.575894 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-catalog-content\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.576129 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-catalog-content\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.594839 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jpwr6\" (UniqueName: \"kubernetes.io/projected/4b30ce9e-e533-4101-88cf-2e63bd5b4000-kube-api-access-jpwr6\") pod \"redhat-marketplace-j2mxc\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:23 crc kubenswrapper[4832]: I1002 19:23:23.684495 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:24 crc kubenswrapper[4832]: I1002 19:23:24.231141 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2mxc"] Oct 02 19:23:24 crc kubenswrapper[4832]: I1002 19:23:24.871146 4832 generic.go:334] "Generic (PLEG): container finished" podID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerID="380be78c15895fee80f1a95ac842e66f75196a2bfff21448671676f897c8253a" exitCode=0 Oct 02 19:23:24 crc kubenswrapper[4832]: I1002 19:23:24.871245 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2mxc" event={"ID":"4b30ce9e-e533-4101-88cf-2e63bd5b4000","Type":"ContainerDied","Data":"380be78c15895fee80f1a95ac842e66f75196a2bfff21448671676f897c8253a"} Oct 02 19:23:24 crc kubenswrapper[4832]: I1002 19:23:24.871501 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2mxc" event={"ID":"4b30ce9e-e533-4101-88cf-2e63bd5b4000","Type":"ContainerStarted","Data":"6e889e0990e25648602d6e390ebac365804c2a99696ff4a6a86399da8da2b66f"} Oct 02 19:23:24 crc kubenswrapper[4832]: I1002 19:23:24.873409 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 19:23:26 crc kubenswrapper[4832]: I1002 19:23:26.894487 4832 generic.go:334] "Generic (PLEG): container finished" podID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerID="791e14bbdfb61fe7bfc4ddb7514ad9e4d12ac813baa4236e9c10874f72ee27cb" exitCode=0 Oct 02 19:23:26 crc kubenswrapper[4832]: I1002 19:23:26.894542 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2mxc" event={"ID":"4b30ce9e-e533-4101-88cf-2e63bd5b4000","Type":"ContainerDied","Data":"791e14bbdfb61fe7bfc4ddb7514ad9e4d12ac813baa4236e9c10874f72ee27cb"} Oct 02 19:23:27 crc kubenswrapper[4832]: I1002 19:23:27.906944 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2mxc" event={"ID":"4b30ce9e-e533-4101-88cf-2e63bd5b4000","Type":"ContainerStarted","Data":"6d9832e59f0fcbc2ae4c101b4970a4bbb6d672de030c105a1bfafb66553aa271"} Oct 02 19:23:27 crc kubenswrapper[4832]: I1002 19:23:27.953005 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j2mxc" podStartSLOduration=2.51844486 podStartE2EDuration="4.952988107s" podCreationTimestamp="2025-10-02 19:23:23 +0000 UTC" firstStartedPulling="2025-10-02 19:23:24.873172494 +0000 UTC m=+3761.842615366" lastFinishedPulling="2025-10-02 19:23:27.307715741 +0000 UTC m=+3764.277158613" observedRunningTime="2025-10-02 19:23:27.938752526 +0000 UTC m=+3764.908195398" watchObservedRunningTime="2025-10-02 19:23:27.952988107 +0000 UTC m=+3764.922430979" Oct 02 19:23:33 crc kubenswrapper[4832]: I1002 19:23:33.686126 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:33 crc kubenswrapper[4832]: I1002 19:23:33.686739 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:33 crc kubenswrapper[4832]: I1002 19:23:33.741841 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:34 crc kubenswrapper[4832]: I1002 19:23:34.080702 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:34 crc kubenswrapper[4832]: I1002 19:23:34.146111 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2mxc"] Oct 02 19:23:35 crc kubenswrapper[4832]: I1002 19:23:35.995617 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j2mxc" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerName="registry-server" containerID="cri-o://6d9832e59f0fcbc2ae4c101b4970a4bbb6d672de030c105a1bfafb66553aa271" gracePeriod=2 Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.019181 4832 generic.go:334] "Generic (PLEG): container finished" podID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerID="6d9832e59f0fcbc2ae4c101b4970a4bbb6d672de030c105a1bfafb66553aa271" exitCode=0 Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.019308 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2mxc" event={"ID":"4b30ce9e-e533-4101-88cf-2e63bd5b4000","Type":"ContainerDied","Data":"6d9832e59f0fcbc2ae4c101b4970a4bbb6d672de030c105a1bfafb66553aa271"} Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.512834 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.630622 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-utilities\") pod \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.630726 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-catalog-content\") pod \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.630782 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpwr6\" (UniqueName: \"kubernetes.io/projected/4b30ce9e-e533-4101-88cf-2e63bd5b4000-kube-api-access-jpwr6\") pod \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\" (UID: \"4b30ce9e-e533-4101-88cf-2e63bd5b4000\") " Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.631343 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-utilities" (OuterVolumeSpecName: "utilities") pod "4b30ce9e-e533-4101-88cf-2e63bd5b4000" (UID: "4b30ce9e-e533-4101-88cf-2e63bd5b4000"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.631824 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.641483 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b30ce9e-e533-4101-88cf-2e63bd5b4000-kube-api-access-jpwr6" (OuterVolumeSpecName: "kube-api-access-jpwr6") pod "4b30ce9e-e533-4101-88cf-2e63bd5b4000" (UID: "4b30ce9e-e533-4101-88cf-2e63bd5b4000"). InnerVolumeSpecName "kube-api-access-jpwr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.646771 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b30ce9e-e533-4101-88cf-2e63bd5b4000" (UID: "4b30ce9e-e533-4101-88cf-2e63bd5b4000"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.734789 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b30ce9e-e533-4101-88cf-2e63bd5b4000-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:23:37 crc kubenswrapper[4832]: I1002 19:23:37.734827 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpwr6\" (UniqueName: \"kubernetes.io/projected/4b30ce9e-e533-4101-88cf-2e63bd5b4000-kube-api-access-jpwr6\") on node \"crc\" DevicePath \"\"" Oct 02 19:23:38 crc kubenswrapper[4832]: I1002 19:23:38.034437 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2mxc" event={"ID":"4b30ce9e-e533-4101-88cf-2e63bd5b4000","Type":"ContainerDied","Data":"6e889e0990e25648602d6e390ebac365804c2a99696ff4a6a86399da8da2b66f"} Oct 02 19:23:38 crc kubenswrapper[4832]: I1002 19:23:38.034779 4832 scope.go:117] "RemoveContainer" containerID="6d9832e59f0fcbc2ae4c101b4970a4bbb6d672de030c105a1bfafb66553aa271" Oct 02 19:23:38 crc kubenswrapper[4832]: I1002 19:23:38.034935 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2mxc" Oct 02 19:23:38 crc kubenswrapper[4832]: I1002 19:23:38.072249 4832 scope.go:117] "RemoveContainer" containerID="791e14bbdfb61fe7bfc4ddb7514ad9e4d12ac813baa4236e9c10874f72ee27cb" Oct 02 19:23:38 crc kubenswrapper[4832]: I1002 19:23:38.087843 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2mxc"] Oct 02 19:23:38 crc kubenswrapper[4832]: I1002 19:23:38.100575 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2mxc"] Oct 02 19:23:38 crc kubenswrapper[4832]: I1002 19:23:38.106845 4832 scope.go:117] "RemoveContainer" containerID="380be78c15895fee80f1a95ac842e66f75196a2bfff21448671676f897c8253a" Oct 02 19:23:39 crc kubenswrapper[4832]: I1002 19:23:39.236406 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" path="/var/lib/kubelet/pods/4b30ce9e-e533-4101-88cf-2e63bd5b4000/volumes" Oct 02 19:23:56 crc kubenswrapper[4832]: I1002 19:23:56.876035 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:23:56 crc kubenswrapper[4832]: I1002 19:23:56.876663 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.745087 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-knvkv"] Oct 02 19:24:01 crc kubenswrapper[4832]: E1002 19:24:01.749057 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerName="extract-utilities" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.749200 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerName="extract-utilities" Oct 02 19:24:01 crc kubenswrapper[4832]: E1002 19:24:01.749349 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerName="extract-content" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.749444 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerName="extract-content" Oct 02 19:24:01 crc kubenswrapper[4832]: E1002 19:24:01.749542 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerName="registry-server" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.749629 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerName="registry-server" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.750306 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b30ce9e-e533-4101-88cf-2e63bd5b4000" containerName="registry-server" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.756006 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.758108 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knvkv"] Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.836154 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-utilities\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.836331 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-catalog-content\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.836374 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjwv\" (UniqueName: \"kubernetes.io/projected/47ad5a9d-d3c6-4300-b054-11adcb392e9c-kube-api-access-nmjwv\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.938878 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-catalog-content\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.939547 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmjwv\" (UniqueName: \"kubernetes.io/projected/47ad5a9d-d3c6-4300-b054-11adcb392e9c-kube-api-access-nmjwv\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.939551 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-catalog-content\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.940080 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-utilities\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.940441 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-utilities\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:01 crc kubenswrapper[4832]: I1002 19:24:01.963316 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nmjwv\" (UniqueName: \"kubernetes.io/projected/47ad5a9d-d3c6-4300-b054-11adcb392e9c-kube-api-access-nmjwv\") pod \"redhat-operators-knvkv\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:02 crc kubenswrapper[4832]: I1002 19:24:02.086360 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:02 crc kubenswrapper[4832]: I1002 19:24:02.589153 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knvkv"] Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.142863 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lsmvc"] Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.147281 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.156427 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsmvc"] Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.210287 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89s7\" (UniqueName: \"kubernetes.io/projected/5e68e565-7045-4e5e-bcb3-e2bf87145d51-kube-api-access-v89s7\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.210334 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-utilities\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.210472 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-catalog-content\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.312661 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89s7\" (UniqueName: \"kubernetes.io/projected/5e68e565-7045-4e5e-bcb3-e2bf87145d51-kube-api-access-v89s7\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.312717 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-utilities\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.313204 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-utilities\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " 
pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.313842 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-catalog-content\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.314202 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-catalog-content\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.327133 4832 generic.go:334] "Generic (PLEG): container finished" podID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerID="b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586" exitCode=0 Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.327193 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvkv" event={"ID":"47ad5a9d-d3c6-4300-b054-11adcb392e9c","Type":"ContainerDied","Data":"b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586"} Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.327217 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvkv" event={"ID":"47ad5a9d-d3c6-4300-b054-11adcb392e9c","Type":"ContainerStarted","Data":"a390787c0f6c03a7f1aef63cc44c2d23d89b011517ce743bd476b1bede989eb5"} Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.357251 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89s7\" (UniqueName: \"kubernetes.io/projected/5e68e565-7045-4e5e-bcb3-e2bf87145d51-kube-api-access-v89s7\") pod \"certified-operators-lsmvc\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:03 crc kubenswrapper[4832]: I1002 19:24:03.469409 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:04 crc kubenswrapper[4832]: I1002 19:24:04.020775 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsmvc"] Oct 02 19:24:04 crc kubenswrapper[4832]: I1002 19:24:04.354156 4832 generic.go:334] "Generic (PLEG): container finished" podID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerID="6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3" exitCode=0 Oct 02 19:24:04 crc kubenswrapper[4832]: I1002 19:24:04.354608 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsmvc" event={"ID":"5e68e565-7045-4e5e-bcb3-e2bf87145d51","Type":"ContainerDied","Data":"6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3"} Oct 02 19:24:04 crc kubenswrapper[4832]: I1002 19:24:04.354638 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsmvc" event={"ID":"5e68e565-7045-4e5e-bcb3-e2bf87145d51","Type":"ContainerStarted","Data":"1889c933eee07d4a4b5809c1779eac102b8ef82f3f4d9e8fccb9612f8486ce24"} Oct 02 19:24:05 crc kubenswrapper[4832]: I1002 19:24:05.371073 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvkv" event={"ID":"47ad5a9d-d3c6-4300-b054-11adcb392e9c","Type":"ContainerStarted","Data":"3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f"} Oct 02 19:24:06 crc kubenswrapper[4832]: I1002 19:24:06.402371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsmvc" event={"ID":"5e68e565-7045-4e5e-bcb3-e2bf87145d51","Type":"ContainerStarted","Data":"3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116"} Oct 02 19:24:08 crc kubenswrapper[4832]: I1002 19:24:08.433697 4832 generic.go:334] "Generic (PLEG): container finished" podID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerID="3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116" exitCode=0 Oct 02 19:24:08 crc kubenswrapper[4832]: I1002 19:24:08.433758 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsmvc" event={"ID":"5e68e565-7045-4e5e-bcb3-e2bf87145d51","Type":"ContainerDied","Data":"3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116"} Oct 02 19:24:09 crc kubenswrapper[4832]: I1002 19:24:09.450979 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsmvc" event={"ID":"5e68e565-7045-4e5e-bcb3-e2bf87145d51","Type":"ContainerStarted","Data":"4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55"} Oct 02 19:24:09 crc kubenswrapper[4832]: I1002 19:24:09.453492 4832 generic.go:334] "Generic (PLEG): container finished" podID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerID="3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f" exitCode=0 Oct 02 19:24:09 crc kubenswrapper[4832]: I1002 19:24:09.453525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvkv" event={"ID":"47ad5a9d-d3c6-4300-b054-11adcb392e9c","Type":"ContainerDied","Data":"3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f"} Oct 02 19:24:09 crc kubenswrapper[4832]: I1002 19:24:09.480420 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lsmvc" podStartSLOduration=1.778109133 podStartE2EDuration="6.480397452s" 
podCreationTimestamp="2025-10-02 19:24:03 +0000 UTC" firstStartedPulling="2025-10-02 19:24:04.356726262 +0000 UTC m=+3801.326169174" lastFinishedPulling="2025-10-02 19:24:09.059014581 +0000 UTC m=+3806.028457493" observedRunningTime="2025-10-02 19:24:09.4752065 +0000 UTC m=+3806.444649392" watchObservedRunningTime="2025-10-02 19:24:09.480397452 +0000 UTC m=+3806.449840324" Oct 02 19:24:10 crc kubenswrapper[4832]: I1002 19:24:10.477973 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvkv" event={"ID":"47ad5a9d-d3c6-4300-b054-11adcb392e9c","Type":"ContainerStarted","Data":"8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9"} Oct 02 19:24:10 crc kubenswrapper[4832]: I1002 19:24:10.515739 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-knvkv" podStartSLOduration=2.846582627 podStartE2EDuration="9.515707736s" podCreationTimestamp="2025-10-02 19:24:01 +0000 UTC" firstStartedPulling="2025-10-02 19:24:03.329249 +0000 UTC m=+3800.298691872" lastFinishedPulling="2025-10-02 19:24:09.998374099 +0000 UTC m=+3806.967816981" observedRunningTime="2025-10-02 19:24:10.509643747 +0000 UTC m=+3807.479086619" watchObservedRunningTime="2025-10-02 19:24:10.515707736 +0000 UTC m=+3807.485150648" Oct 02 19:24:12 crc kubenswrapper[4832]: I1002 19:24:12.086667 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:12 crc kubenswrapper[4832]: I1002 19:24:12.087388 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:13 crc kubenswrapper[4832]: I1002 19:24:13.150577 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-knvkv" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="registry-server" probeResult="failure" output=< Oct 02 19:24:13 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:24:13 crc kubenswrapper[4832]: > Oct 02 19:24:13 crc kubenswrapper[4832]: I1002 19:24:13.471537 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:13 crc kubenswrapper[4832]: I1002 19:24:13.471581 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:14 crc kubenswrapper[4832]: I1002 19:24:14.523575 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-lsmvc" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="registry-server" probeResult="failure" output=< Oct 02 19:24:14 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:24:14 crc kubenswrapper[4832]: > Oct 02 19:24:23 crc kubenswrapper[4832]: I1002 19:24:23.147911 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-knvkv" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="registry-server" probeResult="failure" output=< Oct 02 19:24:23 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:24:23 crc kubenswrapper[4832]: > Oct 02 19:24:23 crc kubenswrapper[4832]: I1002 19:24:23.544719 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:23 crc 
kubenswrapper[4832]: I1002 19:24:23.620091 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:23 crc kubenswrapper[4832]: I1002 19:24:23.791638 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsmvc"] Oct 02 19:24:24 crc kubenswrapper[4832]: I1002 19:24:24.659224 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lsmvc" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="registry-server" containerID="cri-o://4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55" gracePeriod=2 Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.320839 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.401495 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v89s7\" (UniqueName: \"kubernetes.io/projected/5e68e565-7045-4e5e-bcb3-e2bf87145d51-kube-api-access-v89s7\") pod \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.401571 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-utilities\") pod \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.401798 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-catalog-content\") pod \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\" (UID: \"5e68e565-7045-4e5e-bcb3-e2bf87145d51\") " Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.403047 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-utilities" (OuterVolumeSpecName: "utilities") pod "5e68e565-7045-4e5e-bcb3-e2bf87145d51" (UID: "5e68e565-7045-4e5e-bcb3-e2bf87145d51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.408989 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e68e565-7045-4e5e-bcb3-e2bf87145d51-kube-api-access-v89s7" (OuterVolumeSpecName: "kube-api-access-v89s7") pod "5e68e565-7045-4e5e-bcb3-e2bf87145d51" (UID: "5e68e565-7045-4e5e-bcb3-e2bf87145d51"). InnerVolumeSpecName "kube-api-access-v89s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.450146 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e68e565-7045-4e5e-bcb3-e2bf87145d51" (UID: "5e68e565-7045-4e5e-bcb3-e2bf87145d51"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.504672 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.504936 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v89s7\" (UniqueName: \"kubernetes.io/projected/5e68e565-7045-4e5e-bcb3-e2bf87145d51-kube-api-access-v89s7\") on node \"crc\" DevicePath \"\"" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.504951 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e68e565-7045-4e5e-bcb3-e2bf87145d51-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.674246 4832 generic.go:334] "Generic (PLEG): container finished" podID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerID="4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55" exitCode=0 Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.674308 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsmvc" event={"ID":"5e68e565-7045-4e5e-bcb3-e2bf87145d51","Type":"ContainerDied","Data":"4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55"} Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.674348 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsmvc" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.675645 4832 scope.go:117] "RemoveContainer" containerID="4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.675612 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsmvc" event={"ID":"5e68e565-7045-4e5e-bcb3-e2bf87145d51","Type":"ContainerDied","Data":"1889c933eee07d4a4b5809c1779eac102b8ef82f3f4d9e8fccb9612f8486ce24"} Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.712000 4832 scope.go:117] "RemoveContainer" containerID="3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.716753 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsmvc"] Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.738195 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lsmvc"] Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.746945 4832 scope.go:117] "RemoveContainer" containerID="6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.795064 4832 scope.go:117] "RemoveContainer" containerID="4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55" Oct 02 19:24:25 crc kubenswrapper[4832]: E1002 19:24:25.795517 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55\": container with ID starting with 4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55 not found: ID does not exist" containerID="4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.795561 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55"} err="failed to get container status \"4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55\": rpc error: code = NotFound desc = could not find container \"4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55\": container with ID starting with 4c760e7d00391d5b792f684ca82c4a9c8832c07dbcd6e7de93b215cc00df3c55 not found: ID does not exist" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.795588 4832 scope.go:117] "RemoveContainer" containerID="3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116" Oct 02 19:24:25 crc kubenswrapper[4832]: E1002 19:24:25.795935 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116\": container with ID starting with 3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116 not found: ID does not exist" containerID="3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.795973 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116"} err="failed to get container status \"3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116\": rpc error: code = NotFound desc = could not find container \"3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116\": container with ID starting with 3bdcd0e2b7d02cf37210b42d1138b2b3addad07840af28a563b055bea7c06116 not found: ID does not exist" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.795998 4832 scope.go:117] "RemoveContainer" containerID="6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3" Oct 02 19:24:25 crc kubenswrapper[4832]: E1002 19:24:25.796291 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3\": container with ID starting with 6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3 not found: ID does not exist" containerID="6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3" Oct 02 19:24:25 crc kubenswrapper[4832]: I1002 19:24:25.796316 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3"} err="failed to get container status \"6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3\": rpc error: code = NotFound desc = could not find container \"6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3\": container with ID starting with 6583fc6c6f1a3be0b9d6d2ca46e97421d2326b2a3a1ed1b5cce99028b8867bf3 not found: ID does not exist" Oct 02 19:24:26 crc kubenswrapper[4832]: I1002 19:24:26.876094 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:24:26 crc kubenswrapper[4832]: I1002 19:24:26.877667 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" 
podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:24:27 crc kubenswrapper[4832]: I1002 19:24:27.259834 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" path="/var/lib/kubelet/pods/5e68e565-7045-4e5e-bcb3-e2bf87145d51/volumes" Oct 02 19:24:32 crc kubenswrapper[4832]: I1002 19:24:32.155109 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:32 crc kubenswrapper[4832]: I1002 19:24:32.231506 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:32 crc kubenswrapper[4832]: I1002 19:24:32.938186 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knvkv"] Oct 02 19:24:33 crc kubenswrapper[4832]: I1002 19:24:33.789224 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-knvkv" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="registry-server" containerID="cri-o://8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9" gracePeriod=2 Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.427813 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.548793 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmjwv\" (UniqueName: \"kubernetes.io/projected/47ad5a9d-d3c6-4300-b054-11adcb392e9c-kube-api-access-nmjwv\") pod \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.548915 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-utilities\") pod \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.549045 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-catalog-content\") pod \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\" (UID: \"47ad5a9d-d3c6-4300-b054-11adcb392e9c\") " Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.550083 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-utilities" (OuterVolumeSpecName: "utilities") pod "47ad5a9d-d3c6-4300-b054-11adcb392e9c" (UID: "47ad5a9d-d3c6-4300-b054-11adcb392e9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.556634 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ad5a9d-d3c6-4300-b054-11adcb392e9c-kube-api-access-nmjwv" (OuterVolumeSpecName: "kube-api-access-nmjwv") pod "47ad5a9d-d3c6-4300-b054-11adcb392e9c" (UID: "47ad5a9d-d3c6-4300-b054-11adcb392e9c"). InnerVolumeSpecName "kube-api-access-nmjwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.652764 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmjwv\" (UniqueName: \"kubernetes.io/projected/47ad5a9d-d3c6-4300-b054-11adcb392e9c-kube-api-access-nmjwv\") on node \"crc\" DevicePath \"\"" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.652804 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.687836 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47ad5a9d-d3c6-4300-b054-11adcb392e9c" (UID: "47ad5a9d-d3c6-4300-b054-11adcb392e9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.755950 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47ad5a9d-d3c6-4300-b054-11adcb392e9c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.808536 4832 generic.go:334] "Generic (PLEG): container finished" podID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerID="8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9" exitCode=0 Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.808605 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvkv" event={"ID":"47ad5a9d-d3c6-4300-b054-11adcb392e9c","Type":"ContainerDied","Data":"8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9"} Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.808647 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knvkv" event={"ID":"47ad5a9d-d3c6-4300-b054-11adcb392e9c","Type":"ContainerDied","Data":"a390787c0f6c03a7f1aef63cc44c2d23d89b011517ce743bd476b1bede989eb5"} Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.808677 4832 scope.go:117] "RemoveContainer" containerID="8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.808612 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knvkv" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.836333 4832 scope.go:117] "RemoveContainer" containerID="3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.874922 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knvkv"] Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.892181 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-knvkv"] Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.905088 4832 scope.go:117] "RemoveContainer" containerID="b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.936683 4832 scope.go:117] "RemoveContainer" containerID="8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9" Oct 02 19:24:34 crc kubenswrapper[4832]: E1002 19:24:34.937234 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9\": container with ID starting with 8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9 not found: ID does not exist" containerID="8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.937311 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9"} err="failed to get container status \"8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9\": rpc error: code = NotFound desc = could not find container \"8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9\": container with ID starting with 8084aaac938896d2b0eae03dc3786bb0c808879821674b58eda6c3664da4d4c9 not found: ID does not exist" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.937352 4832 scope.go:117] "RemoveContainer" containerID="3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f" Oct 02 19:24:34 crc kubenswrapper[4832]: E1002 19:24:34.937787 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f\": container with ID starting with 3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f not found: ID does not exist" containerID="3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.937821 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f"} err="failed to get container status \"3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f\": rpc error: code = NotFound desc = could not find container \"3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f\": container with ID starting with 3d3aab96453732f83e8779a7fb3145b6447f53381b08905a82414f1b60c0846f not found: ID does not exist" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.937849 4832 scope.go:117] "RemoveContainer" containerID="b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586" Oct 02 19:24:34 crc kubenswrapper[4832]: E1002 19:24:34.938233 4832 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586\": container with ID starting with b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586 not found: ID does not exist" containerID="b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586" Oct 02 19:24:34 crc kubenswrapper[4832]: I1002 19:24:34.938292 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586"} err="failed to get container status \"b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586\": rpc error: code = NotFound desc = could not find container \"b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586\": container with ID starting with b34179f9fb05824b744d75cbf4d0142a03f30ddfaf27ebc67474735a8983e586 not found: ID does not exist" Oct 02 19:24:35 crc kubenswrapper[4832]: I1002 19:24:35.238183 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" path="/var/lib/kubelet/pods/47ad5a9d-d3c6-4300-b054-11adcb392e9c/volumes" Oct 02 19:24:56 crc kubenswrapper[4832]: I1002 19:24:56.875677 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:24:56 crc kubenswrapper[4832]: I1002 19:24:56.876313 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:24:56 crc kubenswrapper[4832]: I1002 19:24:56.876382 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 19:24:56 crc kubenswrapper[4832]: I1002 19:24:56.877756 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 19:24:56 crc kubenswrapper[4832]: I1002 19:24:56.877852 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" gracePeriod=600 Oct 02 19:24:57 crc kubenswrapper[4832]: E1002 19:24:57.003377 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:24:57 crc kubenswrapper[4832]: I1002 19:24:57.128447 4832 generic.go:334] "Generic 
(PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" exitCode=0 Oct 02 19:24:57 crc kubenswrapper[4832]: I1002 19:24:57.128525 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17"} Oct 02 19:24:57 crc kubenswrapper[4832]: I1002 19:24:57.128600 4832 scope.go:117] "RemoveContainer" containerID="5c75447414fde7d9ee4905d41818ed0cf89bd7f0d47454db8b2a2692d457c8df" Oct 02 19:24:57 crc kubenswrapper[4832]: I1002 19:24:57.129365 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:24:57 crc kubenswrapper[4832]: E1002 19:24:57.129785 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:25:11 crc kubenswrapper[4832]: I1002 19:25:11.223025 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:25:11 crc kubenswrapper[4832]: E1002 19:25:11.223728 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:25:25 crc kubenswrapper[4832]: I1002 19:25:25.233214 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:25:25 crc kubenswrapper[4832]: E1002 19:25:25.234171 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:25:36 crc kubenswrapper[4832]: I1002 19:25:36.223460 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:25:36 crc kubenswrapper[4832]: E1002 19:25:36.224131 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:25:50 crc kubenswrapper[4832]: I1002 19:25:50.222914 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 
19:25:50 crc kubenswrapper[4832]: E1002 19:25:50.224206 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:26:01 crc kubenswrapper[4832]: I1002 19:26:01.223834 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:26:01 crc kubenswrapper[4832]: E1002 19:26:01.224625 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:26:12 crc kubenswrapper[4832]: I1002 19:26:12.223237 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:26:12 crc kubenswrapper[4832]: E1002 19:26:12.224881 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:26:27 crc kubenswrapper[4832]: I1002 19:26:27.225051 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:26:27 crc kubenswrapper[4832]: E1002 19:26:27.225750 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:26:38 crc kubenswrapper[4832]: I1002 19:26:38.222591 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:26:38 crc kubenswrapper[4832]: E1002 19:26:38.223305 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:26:49 crc kubenswrapper[4832]: E1002 19:26:49.039097 4832 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.180:40484->38.102.83.180:36377: write tcp 38.102.83.180:40484->38.102.83.180:36377: write: connection reset by peer Oct 02 19:26:53 crc kubenswrapper[4832]: I1002 19:26:53.225334 4832 scope.go:117] "RemoveContainer" 
containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:26:53 crc kubenswrapper[4832]: E1002 19:26:53.226540 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:27:08 crc kubenswrapper[4832]: I1002 19:27:08.223797 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:27:08 crc kubenswrapper[4832]: E1002 19:27:08.225086 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:27:21 crc kubenswrapper[4832]: I1002 19:27:21.223906 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:27:21 crc kubenswrapper[4832]: E1002 19:27:21.225192 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:27:35 crc kubenswrapper[4832]: I1002 19:27:35.223799 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:27:35 crc kubenswrapper[4832]: E1002 19:27:35.225016 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:27:46 crc kubenswrapper[4832]: I1002 19:27:46.223766 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:27:46 crc kubenswrapper[4832]: E1002 19:27:46.224772 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:28:01 crc kubenswrapper[4832]: I1002 19:28:01.223982 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:28:01 crc kubenswrapper[4832]: E1002 19:28:01.224793 4832 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:28:12 crc kubenswrapper[4832]: I1002 19:28:12.224170 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:28:12 crc kubenswrapper[4832]: E1002 19:28:12.225213 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:28:25 crc kubenswrapper[4832]: I1002 19:28:25.263908 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:28:25 crc kubenswrapper[4832]: E1002 19:28:25.265775 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.860820 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqqbj"] Oct 02 19:28:35 crc kubenswrapper[4832]: E1002 19:28:35.862380 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="extract-content" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.862405 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="extract-content" Oct 02 19:28:35 crc kubenswrapper[4832]: E1002 19:28:35.862455 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="extract-utilities" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.862468 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="extract-utilities" Oct 02 19:28:35 crc kubenswrapper[4832]: E1002 19:28:35.862489 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="registry-server" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.862502 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="registry-server" Oct 02 19:28:35 crc kubenswrapper[4832]: E1002 19:28:35.862541 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="extract-content" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.862553 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="extract-content" Oct 02 19:28:35 crc kubenswrapper[4832]: E1002 19:28:35.862579 
4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="extract-utilities" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.862590 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="extract-utilities" Oct 02 19:28:35 crc kubenswrapper[4832]: E1002 19:28:35.862624 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="registry-server" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.862635 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="registry-server" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.863001 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e68e565-7045-4e5e-bcb3-e2bf87145d51" containerName="registry-server" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.863101 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ad5a9d-d3c6-4300-b054-11adcb392e9c" containerName="registry-server" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.869152 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:35 crc kubenswrapper[4832]: I1002 19:28:35.924453 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqqbj"] Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.018436 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e116f154-c5cb-480d-b397-1cd848496e21-utilities\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.018483 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hkhv\" (UniqueName: \"kubernetes.io/projected/e116f154-c5cb-480d-b397-1cd848496e21-kube-api-access-5hkhv\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.018959 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e116f154-c5cb-480d-b397-1cd848496e21-catalog-content\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.121456 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e116f154-c5cb-480d-b397-1cd848496e21-utilities\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.121820 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hkhv\" (UniqueName: \"kubernetes.io/projected/e116f154-c5cb-480d-b397-1cd848496e21-kube-api-access-5hkhv\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 
02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.121901 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e116f154-c5cb-480d-b397-1cd848496e21-utilities\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.122033 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e116f154-c5cb-480d-b397-1cd848496e21-catalog-content\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.122352 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e116f154-c5cb-480d-b397-1cd848496e21-catalog-content\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.141178 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hkhv\" (UniqueName: \"kubernetes.io/projected/e116f154-c5cb-480d-b397-1cd848496e21-kube-api-access-5hkhv\") pod \"community-operators-hqqbj\" (UID: \"e116f154-c5cb-480d-b397-1cd848496e21\") " pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.233659 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:36 crc kubenswrapper[4832]: I1002 19:28:36.820547 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqqbj"] Oct 02 19:28:37 crc kubenswrapper[4832]: I1002 19:28:37.064026 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqqbj" event={"ID":"e116f154-c5cb-480d-b397-1cd848496e21","Type":"ContainerStarted","Data":"c5fe6db6dbd67343bede8846cf8cc4986b720ada65506a2d8ff40151c164cda5"} Oct 02 19:28:37 crc kubenswrapper[4832]: I1002 19:28:37.065550 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqqbj" event={"ID":"e116f154-c5cb-480d-b397-1cd848496e21","Type":"ContainerStarted","Data":"9449719b515a7957710f2f771e90ce75596b73660069e0e4103efac5454d9f84"} Oct 02 19:28:38 crc kubenswrapper[4832]: I1002 19:28:38.079318 4832 generic.go:334] "Generic (PLEG): container finished" podID="e116f154-c5cb-480d-b397-1cd848496e21" containerID="c5fe6db6dbd67343bede8846cf8cc4986b720ada65506a2d8ff40151c164cda5" exitCode=0 Oct 02 19:28:38 crc kubenswrapper[4832]: I1002 19:28:38.079371 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqqbj" event={"ID":"e116f154-c5cb-480d-b397-1cd848496e21","Type":"ContainerDied","Data":"c5fe6db6dbd67343bede8846cf8cc4986b720ada65506a2d8ff40151c164cda5"} Oct 02 19:28:38 crc kubenswrapper[4832]: I1002 19:28:38.082027 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 19:28:38 crc kubenswrapper[4832]: I1002 19:28:38.223095 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:28:38 crc kubenswrapper[4832]: E1002 
19:28:38.223453 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:28:42 crc kubenswrapper[4832]: I1002 19:28:42.131781 4832 generic.go:334] "Generic (PLEG): container finished" podID="e116f154-c5cb-480d-b397-1cd848496e21" containerID="4a80df66028cad098174803c055ce97f5c2656392220b513f0913ca7040c41df" exitCode=0 Oct 02 19:28:42 crc kubenswrapper[4832]: I1002 19:28:42.131883 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqqbj" event={"ID":"e116f154-c5cb-480d-b397-1cd848496e21","Type":"ContainerDied","Data":"4a80df66028cad098174803c055ce97f5c2656392220b513f0913ca7040c41df"} Oct 02 19:28:43 crc kubenswrapper[4832]: I1002 19:28:43.148950 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqqbj" event={"ID":"e116f154-c5cb-480d-b397-1cd848496e21","Type":"ContainerStarted","Data":"2d0b303978a3ffc7c3490a852809f916d93a81539b2795b6a60a16d540fcd350"} Oct 02 19:28:43 crc kubenswrapper[4832]: I1002 19:28:43.176802 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqqbj" podStartSLOduration=3.580651791 podStartE2EDuration="8.176776589s" podCreationTimestamp="2025-10-02 19:28:35 +0000 UTC" firstStartedPulling="2025-10-02 19:28:38.081769087 +0000 UTC m=+4075.051211959" lastFinishedPulling="2025-10-02 19:28:42.677893865 +0000 UTC m=+4079.647336757" observedRunningTime="2025-10-02 19:28:43.163930211 +0000 UTC m=+4080.133373073" watchObservedRunningTime="2025-10-02 19:28:43.176776589 +0000 UTC m=+4080.146219461" Oct 02 19:28:46 crc kubenswrapper[4832]: I1002 19:28:46.234765 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:46 crc kubenswrapper[4832]: I1002 19:28:46.235307 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:46 crc kubenswrapper[4832]: I1002 19:28:46.317491 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:47 crc kubenswrapper[4832]: I1002 19:28:47.265630 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqqbj" Oct 02 19:28:47 crc kubenswrapper[4832]: I1002 19:28:47.392643 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqqbj"] Oct 02 19:28:47 crc kubenswrapper[4832]: I1002 19:28:47.448280 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m75rn"] Oct 02 19:28:47 crc kubenswrapper[4832]: I1002 19:28:47.448561 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m75rn" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerName="registry-server" containerID="cri-o://2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259" gracePeriod=2 Oct 02 19:28:47 crc kubenswrapper[4832]: I1002 19:28:47.997837 4832 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m75rn" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.123252 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwzbl\" (UniqueName: \"kubernetes.io/projected/65d8bf7b-df8d-4dba-a578-101604e1b479-kube-api-access-qwzbl\") pod \"65d8bf7b-df8d-4dba-a578-101604e1b479\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.123465 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-utilities\") pod \"65d8bf7b-df8d-4dba-a578-101604e1b479\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.123639 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-catalog-content\") pod \"65d8bf7b-df8d-4dba-a578-101604e1b479\" (UID: \"65d8bf7b-df8d-4dba-a578-101604e1b479\") " Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.133830 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d8bf7b-df8d-4dba-a578-101604e1b479-kube-api-access-qwzbl" (OuterVolumeSpecName: "kube-api-access-qwzbl") pod "65d8bf7b-df8d-4dba-a578-101604e1b479" (UID: "65d8bf7b-df8d-4dba-a578-101604e1b479"). InnerVolumeSpecName "kube-api-access-qwzbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.139175 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-utilities" (OuterVolumeSpecName: "utilities") pod "65d8bf7b-df8d-4dba-a578-101604e1b479" (UID: "65d8bf7b-df8d-4dba-a578-101604e1b479"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.220194 4832 generic.go:334] "Generic (PLEG): container finished" podID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerID="2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259" exitCode=0 Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.220245 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m75rn" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.220313 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m75rn" event={"ID":"65d8bf7b-df8d-4dba-a578-101604e1b479","Type":"ContainerDied","Data":"2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259"} Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.220347 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m75rn" event={"ID":"65d8bf7b-df8d-4dba-a578-101604e1b479","Type":"ContainerDied","Data":"9ab07768e8b05c4c53cde7b2f4e37e18cefc18d45b4af6ad95ae9757080cd6b4"} Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.220364 4832 scope.go:117] "RemoveContainer" containerID="2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.225757 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwzbl\" (UniqueName: \"kubernetes.io/projected/65d8bf7b-df8d-4dba-a578-101604e1b479-kube-api-access-qwzbl\") on node \"crc\" DevicePath \"\"" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.225955 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.252578 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65d8bf7b-df8d-4dba-a578-101604e1b479" (UID: "65d8bf7b-df8d-4dba-a578-101604e1b479"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.279792 4832 scope.go:117] "RemoveContainer" containerID="9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.329565 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65d8bf7b-df8d-4dba-a578-101604e1b479-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.335531 4832 scope.go:117] "RemoveContainer" containerID="47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.382240 4832 scope.go:117] "RemoveContainer" containerID="2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259" Oct 02 19:28:48 crc kubenswrapper[4832]: E1002 19:28:48.382635 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259\": container with ID starting with 2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259 not found: ID does not exist" containerID="2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.382673 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259"} err="failed to get container status \"2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259\": rpc error: code = NotFound desc = could not find container \"2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259\": container with ID starting with 2bac18275e96de60d5b330f227d312c1632cdb26e883146ffb36c025e20f5259 not found: ID does not exist" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.382698 4832 scope.go:117] "RemoveContainer" containerID="9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0" Oct 02 19:28:48 crc kubenswrapper[4832]: E1002 19:28:48.383069 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0\": container with ID starting with 9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0 not found: ID does not exist" containerID="9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.383093 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0"} err="failed to get container status \"9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0\": rpc error: code = NotFound desc = could not find container \"9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0\": container with ID starting with 9d9809db0d57c0813b61b98bd4d1d9afb212fb74c915916ceae5f868efd3a0c0 not found: ID does not exist" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.383107 4832 scope.go:117] "RemoveContainer" containerID="47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106" Oct 02 19:28:48 crc kubenswrapper[4832]: E1002 19:28:48.384648 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106\": container with ID starting with 47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106 not found: ID does not exist" containerID="47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.384675 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106"} err="failed to get container status \"47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106\": rpc error: code = NotFound desc = could not find container \"47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106\": container with ID starting with 47043616b87ab0cb3de5ee81c56cff779bdd3ed71b93139a851623c7ff4c6106 not found: ID does not exist" Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.557310 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m75rn"] Oct 02 19:28:48 crc kubenswrapper[4832]: I1002 19:28:48.575089 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m75rn"] Oct 02 19:28:49 crc kubenswrapper[4832]: I1002 19:28:49.237023 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" path="/var/lib/kubelet/pods/65d8bf7b-df8d-4dba-a578-101604e1b479/volumes" Oct 02 19:28:53 crc kubenswrapper[4832]: I1002 19:28:53.223797 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:28:53 crc kubenswrapper[4832]: E1002 19:28:53.225204 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:29:04 crc kubenswrapper[4832]: I1002 19:29:04.222494 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:29:04 crc kubenswrapper[4832]: E1002 19:29:04.223276 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:29:19 crc kubenswrapper[4832]: I1002 19:29:19.223080 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:29:19 crc kubenswrapper[4832]: E1002 19:29:19.223790 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:29:33 crc kubenswrapper[4832]: I1002 19:29:33.223390 4832 
scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:29:33 crc kubenswrapper[4832]: E1002 19:29:33.224053 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:29:48 crc kubenswrapper[4832]: I1002 19:29:48.224308 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:29:48 crc kubenswrapper[4832]: E1002 19:29:48.225664 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.179340 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt"] Oct 02 19:30:00 crc kubenswrapper[4832]: E1002 19:30:00.180296 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerName="extract-content" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.180311 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerName="extract-content" Oct 02 19:30:00 crc kubenswrapper[4832]: E1002 19:30:00.180326 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerName="extract-utilities" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.180333 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerName="extract-utilities" Oct 02 19:30:00 crc kubenswrapper[4832]: E1002 19:30:00.180389 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerName="registry-server" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.180398 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerName="registry-server" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.180702 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d8bf7b-df8d-4dba-a578-101604e1b479" containerName="registry-server" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.181648 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.183914 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.184219 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.193649 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt"] Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.334247 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-config-volume\") pod \"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.335074 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7t45\" (UniqueName: \"kubernetes.io/projected/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-kube-api-access-j7t45\") pod \"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.335143 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-secret-volume\") pod \"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.438709 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-config-volume\") pod \"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.439294 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7t45\" (UniqueName: \"kubernetes.io/projected/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-kube-api-access-j7t45\") pod \"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.439336 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-secret-volume\") pod \"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.439991 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-config-volume\") pod 
\"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.447493 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-secret-volume\") pod \"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.463926 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7t45\" (UniqueName: \"kubernetes.io/projected/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-kube-api-access-j7t45\") pod \"collect-profiles-29323890-dvgnt\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:00 crc kubenswrapper[4832]: I1002 19:30:00.534674 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:01 crc kubenswrapper[4832]: I1002 19:30:01.030362 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt"] Oct 02 19:30:01 crc kubenswrapper[4832]: I1002 19:30:01.114782 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" event={"ID":"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830","Type":"ContainerStarted","Data":"036f765a1856363d38551a16c5a661e5d772322c31521e57a1e490bcd71327d3"} Oct 02 19:30:02 crc kubenswrapper[4832]: I1002 19:30:02.127631 4832 generic.go:334] "Generic (PLEG): container finished" podID="1ed559e0-37b1-4a74-9e5f-7fd37c0d5830" containerID="efc62151f3dd22a07cd27a500329b6c4771e0d1a4c03177ba81041451d107b8e" exitCode=0 Oct 02 19:30:02 crc kubenswrapper[4832]: I1002 19:30:02.127731 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" event={"ID":"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830","Type":"ContainerDied","Data":"efc62151f3dd22a07cd27a500329b6c4771e0d1a4c03177ba81041451d107b8e"} Oct 02 19:30:02 crc kubenswrapper[4832]: I1002 19:30:02.222807 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:30:03 crc kubenswrapper[4832]: I1002 19:30:03.145990 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"d5ed6964e6c71b0e0b79de8e00d22be8045ed670a30b9660848780294aa6bf2f"} Oct 02 19:30:03 crc kubenswrapper[4832]: I1002 19:30:03.810410 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:03 crc kubenswrapper[4832]: I1002 19:30:03.939412 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7t45\" (UniqueName: \"kubernetes.io/projected/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-kube-api-access-j7t45\") pod \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " Oct 02 19:30:03 crc kubenswrapper[4832]: I1002 19:30:03.939479 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-secret-volume\") pod \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " Oct 02 19:30:03 crc kubenswrapper[4832]: I1002 19:30:03.939755 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-config-volume\") pod \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\" (UID: \"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830\") " Oct 02 19:30:03 crc kubenswrapper[4832]: I1002 19:30:03.940960 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-config-volume" (OuterVolumeSpecName: "config-volume") pod "1ed559e0-37b1-4a74-9e5f-7fd37c0d5830" (UID: "1ed559e0-37b1-4a74-9e5f-7fd37c0d5830"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 19:30:03 crc kubenswrapper[4832]: I1002 19:30:03.948570 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1ed559e0-37b1-4a74-9e5f-7fd37c0d5830" (UID: "1ed559e0-37b1-4a74-9e5f-7fd37c0d5830"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:30:03 crc kubenswrapper[4832]: I1002 19:30:03.948802 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-kube-api-access-j7t45" (OuterVolumeSpecName: "kube-api-access-j7t45") pod "1ed559e0-37b1-4a74-9e5f-7fd37c0d5830" (UID: "1ed559e0-37b1-4a74-9e5f-7fd37c0d5830"). InnerVolumeSpecName "kube-api-access-j7t45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:30:04 crc kubenswrapper[4832]: I1002 19:30:04.043788 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 19:30:04 crc kubenswrapper[4832]: I1002 19:30:04.043835 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7t45\" (UniqueName: \"kubernetes.io/projected/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-kube-api-access-j7t45\") on node \"crc\" DevicePath \"\"" Oct 02 19:30:04 crc kubenswrapper[4832]: I1002 19:30:04.043848 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 19:30:04 crc kubenswrapper[4832]: I1002 19:30:04.169416 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" event={"ID":"1ed559e0-37b1-4a74-9e5f-7fd37c0d5830","Type":"ContainerDied","Data":"036f765a1856363d38551a16c5a661e5d772322c31521e57a1e490bcd71327d3"} Oct 02 19:30:04 crc kubenswrapper[4832]: I1002 19:30:04.169731 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036f765a1856363d38551a16c5a661e5d772322c31521e57a1e490bcd71327d3" Oct 02 19:30:04 crc kubenswrapper[4832]: I1002 19:30:04.169659 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt" Oct 02 19:30:04 crc kubenswrapper[4832]: I1002 19:30:04.901115 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh"] Oct 02 19:30:04 crc kubenswrapper[4832]: I1002 19:30:04.912434 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323845-k47mh"] Oct 02 19:30:05 crc kubenswrapper[4832]: I1002 19:30:05.244430 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="499818db-8995-4563-9226-7ed704208bc6" path="/var/lib/kubelet/pods/499818db-8995-4563-9226-7ed704208bc6/volumes" Oct 02 19:30:20 crc kubenswrapper[4832]: I1002 19:30:20.398866 4832 scope.go:117] "RemoveContainer" containerID="69d8627734ffa6f4e67c2ff248aba21562db8a9ee8711d357d180c9e3ab38850" Oct 02 19:32:26 crc kubenswrapper[4832]: I1002 19:32:26.875435 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:32:26 crc kubenswrapper[4832]: I1002 19:32:26.876198 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:32:56 crc kubenswrapper[4832]: I1002 19:32:56.875912 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 02 19:32:56 crc kubenswrapper[4832]: I1002 19:32:56.876568 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:33:26 crc kubenswrapper[4832]: I1002 19:33:26.875339 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:33:26 crc kubenswrapper[4832]: I1002 19:33:26.877227 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:33:26 crc kubenswrapper[4832]: I1002 19:33:26.877298 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 19:33:26 crc kubenswrapper[4832]: I1002 19:33:26.878350 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5ed6964e6c71b0e0b79de8e00d22be8045ed670a30b9660848780294aa6bf2f"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 19:33:26 crc kubenswrapper[4832]: I1002 19:33:26.878410 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://d5ed6964e6c71b0e0b79de8e00d22be8045ed670a30b9660848780294aa6bf2f" gracePeriod=600 Oct 02 19:33:27 crc kubenswrapper[4832]: I1002 19:33:27.770315 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="d5ed6964e6c71b0e0b79de8e00d22be8045ed670a30b9660848780294aa6bf2f" exitCode=0 Oct 02 19:33:27 crc kubenswrapper[4832]: I1002 19:33:27.770375 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"d5ed6964e6c71b0e0b79de8e00d22be8045ed670a30b9660848780294aa6bf2f"} Oct 02 19:33:27 crc kubenswrapper[4832]: I1002 19:33:27.770898 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"} Oct 02 19:33:27 crc kubenswrapper[4832]: I1002 19:33:27.770936 4832 scope.go:117] "RemoveContainer" containerID="c2f52cd5e3c913175d8b125855c67234e2865c924e3ba76f11d3ff7f01925c17" Oct 02 19:33:56 crc kubenswrapper[4832]: E1002 19:33:56.094128 4832 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.180:47070->38.102.83.180:36377: write tcp 38.102.83.180:47070->38.102.83.180:36377: write: broken 
pipe Oct 02 19:35:56 crc kubenswrapper[4832]: I1002 19:35:56.875870 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:35:56 crc kubenswrapper[4832]: I1002 19:35:56.876523 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.004503 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7czlm"] Oct 02 19:36:00 crc kubenswrapper[4832]: E1002 19:36:00.007351 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ed559e0-37b1-4a74-9e5f-7fd37c0d5830" containerName="collect-profiles" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.007550 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ed559e0-37b1-4a74-9e5f-7fd37c0d5830" containerName="collect-profiles" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.008213 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ed559e0-37b1-4a74-9e5f-7fd37c0d5830" containerName="collect-profiles" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.011687 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.031973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-catalog-content\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.032069 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfqd5\" (UniqueName: \"kubernetes.io/projected/8619c269-c8f7-405a-935a-664d2ee6ce67-kube-api-access-tfqd5\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.032224 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-utilities\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.044317 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7czlm"] Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.134473 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-catalog-content\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 
19:36:00.134533 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfqd5\" (UniqueName: \"kubernetes.io/projected/8619c269-c8f7-405a-935a-664d2ee6ce67-kube-api-access-tfqd5\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.134594 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-utilities\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.135030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-catalog-content\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.135030 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-utilities\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.156185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfqd5\" (UniqueName: \"kubernetes.io/projected/8619c269-c8f7-405a-935a-664d2ee6ce67-kube-api-access-tfqd5\") pod \"redhat-operators-7czlm\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.341242 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:00 crc kubenswrapper[4832]: I1002 19:36:00.870499 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7czlm"] Oct 02 19:36:01 crc kubenswrapper[4832]: I1002 19:36:01.755445 4832 generic.go:334] "Generic (PLEG): container finished" podID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerID="f40bef4ed81ba872bd9685278efbac0a17c46617939be1a5163fdd56e7f9adcb" exitCode=0 Oct 02 19:36:01 crc kubenswrapper[4832]: I1002 19:36:01.755529 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7czlm" event={"ID":"8619c269-c8f7-405a-935a-664d2ee6ce67","Type":"ContainerDied","Data":"f40bef4ed81ba872bd9685278efbac0a17c46617939be1a5163fdd56e7f9adcb"} Oct 02 19:36:01 crc kubenswrapper[4832]: I1002 19:36:01.756848 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7czlm" event={"ID":"8619c269-c8f7-405a-935a-664d2ee6ce67","Type":"ContainerStarted","Data":"f87bb9bf0010fe3a7e3ab9d853ca2b07822781f94a6c6bd74e42cf07edcc5ba7"} Oct 02 19:36:01 crc kubenswrapper[4832]: I1002 19:36:01.758210 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 19:36:03 crc kubenswrapper[4832]: I1002 19:36:03.788640 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7czlm" event={"ID":"8619c269-c8f7-405a-935a-664d2ee6ce67","Type":"ContainerStarted","Data":"70bf37ae00b609573242ade0cc771ad346a9d7ff936202181d913d2b3c9cb533"} Oct 02 19:36:07 crc kubenswrapper[4832]: I1002 19:36:07.834375 4832 generic.go:334] "Generic (PLEG): container finished" podID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerID="70bf37ae00b609573242ade0cc771ad346a9d7ff936202181d913d2b3c9cb533" exitCode=0 Oct 02 19:36:07 crc kubenswrapper[4832]: I1002 19:36:07.834497 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7czlm" event={"ID":"8619c269-c8f7-405a-935a-664d2ee6ce67","Type":"ContainerDied","Data":"70bf37ae00b609573242ade0cc771ad346a9d7ff936202181d913d2b3c9cb533"} Oct 02 19:36:08 crc kubenswrapper[4832]: I1002 19:36:08.849671 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7czlm" event={"ID":"8619c269-c8f7-405a-935a-664d2ee6ce67","Type":"ContainerStarted","Data":"5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6"} Oct 02 19:36:08 crc kubenswrapper[4832]: I1002 19:36:08.876050 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7czlm" podStartSLOduration=3.355165315 podStartE2EDuration="9.876035618s" podCreationTimestamp="2025-10-02 19:35:59 +0000 UTC" firstStartedPulling="2025-10-02 19:36:01.757907505 +0000 UTC m=+4518.727350377" lastFinishedPulling="2025-10-02 19:36:08.278777798 +0000 UTC m=+4525.248220680" observedRunningTime="2025-10-02 19:36:08.874373386 +0000 UTC m=+4525.843816248" watchObservedRunningTime="2025-10-02 19:36:08.876035618 +0000 UTC m=+4525.845478490" Oct 02 19:36:10 crc kubenswrapper[4832]: I1002 19:36:10.341577 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:10 crc kubenswrapper[4832]: I1002 19:36:10.341983 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:11 crc 
kubenswrapper[4832]: I1002 19:36:11.410428 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7czlm" podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="registry-server" probeResult="failure" output=< Oct 02 19:36:11 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:36:11 crc kubenswrapper[4832]: > Oct 02 19:36:20 crc kubenswrapper[4832]: I1002 19:36:20.414483 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:20 crc kubenswrapper[4832]: I1002 19:36:20.474601 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:20 crc kubenswrapper[4832]: I1002 19:36:20.659805 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7czlm"] Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.017013 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7czlm" podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="registry-server" containerID="cri-o://5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6" gracePeriod=2 Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.617370 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.766680 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-utilities\") pod \"8619c269-c8f7-405a-935a-664d2ee6ce67\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.766928 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-catalog-content\") pod \"8619c269-c8f7-405a-935a-664d2ee6ce67\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.767213 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfqd5\" (UniqueName: \"kubernetes.io/projected/8619c269-c8f7-405a-935a-664d2ee6ce67-kube-api-access-tfqd5\") pod \"8619c269-c8f7-405a-935a-664d2ee6ce67\" (UID: \"8619c269-c8f7-405a-935a-664d2ee6ce67\") " Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.767720 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-utilities" (OuterVolumeSpecName: "utilities") pod "8619c269-c8f7-405a-935a-664d2ee6ce67" (UID: "8619c269-c8f7-405a-935a-664d2ee6ce67"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.768617 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.774331 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8619c269-c8f7-405a-935a-664d2ee6ce67-kube-api-access-tfqd5" (OuterVolumeSpecName: "kube-api-access-tfqd5") pod "8619c269-c8f7-405a-935a-664d2ee6ce67" (UID: "8619c269-c8f7-405a-935a-664d2ee6ce67"). InnerVolumeSpecName "kube-api-access-tfqd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.866128 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8619c269-c8f7-405a-935a-664d2ee6ce67" (UID: "8619c269-c8f7-405a-935a-664d2ee6ce67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.871361 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8619c269-c8f7-405a-935a-664d2ee6ce67-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:36:22 crc kubenswrapper[4832]: I1002 19:36:22.871411 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfqd5\" (UniqueName: \"kubernetes.io/projected/8619c269-c8f7-405a-935a-664d2ee6ce67-kube-api-access-tfqd5\") on node \"crc\" DevicePath \"\"" Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.028601 4832 generic.go:334] "Generic (PLEG): container finished" podID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerID="5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6" exitCode=0 Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.028698 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7czlm" Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.028937 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7czlm" event={"ID":"8619c269-c8f7-405a-935a-664d2ee6ce67","Type":"ContainerDied","Data":"5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6"} Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.028982 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7czlm" event={"ID":"8619c269-c8f7-405a-935a-664d2ee6ce67","Type":"ContainerDied","Data":"f87bb9bf0010fe3a7e3ab9d853ca2b07822781f94a6c6bd74e42cf07edcc5ba7"} Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.029025 4832 scope.go:117] "RemoveContainer" containerID="5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6" Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.059449 4832 scope.go:117] "RemoveContainer" containerID="70bf37ae00b609573242ade0cc771ad346a9d7ff936202181d913d2b3c9cb533" Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.075460 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7czlm"] Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.088608 4832 scope.go:117] "RemoveContainer" containerID="f40bef4ed81ba872bd9685278efbac0a17c46617939be1a5163fdd56e7f9adcb" Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.091034 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7czlm"] Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.154220 4832 scope.go:117] "RemoveContainer" containerID="5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6" Oct 02 19:36:23 crc kubenswrapper[4832]: E1002 19:36:23.154772 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6\": container with ID starting with 5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6 not found: ID does not exist" containerID="5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6" Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.154807 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6"} err="failed to get container status \"5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6\": rpc error: code = NotFound desc = could not find container \"5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6\": container with ID starting with 5c287fe674581cf5339ae0bc47c0f435727b1ce813bec5c103dbc2fb92f9f2f6 not found: ID does not exist" Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.154837 4832 scope.go:117] "RemoveContainer" containerID="70bf37ae00b609573242ade0cc771ad346a9d7ff936202181d913d2b3c9cb533" Oct 02 19:36:23 crc kubenswrapper[4832]: E1002 19:36:23.156456 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70bf37ae00b609573242ade0cc771ad346a9d7ff936202181d913d2b3c9cb533\": container with ID starting with 70bf37ae00b609573242ade0cc771ad346a9d7ff936202181d913d2b3c9cb533 not found: ID does not exist" containerID="70bf37ae00b609573242ade0cc771ad346a9d7ff936202181d913d2b3c9cb533" Oct 02 19:36:23 crc kubenswrapper[4832]: I1002 19:36:23.156493 4832 
Oct 02 19:36:26 crc kubenswrapper[4832]: I1002 19:36:26.875544 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 19:36:26 crc kubenswrapper[4832]: I1002 19:36:26.876225 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 19:36:38 crc kubenswrapper[4832]: E1002 19:36:38.582194 4832 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.180:34522->38.102.83.180:36377: write tcp 38.102.83.180:34522->38.102.83.180:36377: write: broken pipe
Oct 02 19:36:56 crc kubenswrapper[4832]: I1002 19:36:56.875195 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 02 19:36:56 crc kubenswrapper[4832]: I1002 19:36:56.875833 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 02 19:36:56 crc kubenswrapper[4832]: I1002 19:36:56.875898 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg"
Oct 02 19:36:56 crc kubenswrapper[4832]: I1002 19:36:56.876917 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 02 19:36:56 crc kubenswrapper[4832]: I1002 19:36:56.877011 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" gracePeriod=600
Oct 02 19:36:57 crc kubenswrapper[4832]: E1002 19:36:57.005040 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:36:57 crc kubenswrapper[4832]: I1002 19:36:57.526183 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" exitCode=0
Oct 02 19:36:57 crc kubenswrapper[4832]: I1002 19:36:57.526294 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"}
Oct 02 19:36:57 crc kubenswrapper[4832]: I1002 19:36:57.526658 4832 scope.go:117] "RemoveContainer" containerID="d5ed6964e6c71b0e0b79de8e00d22be8045ed670a30b9660848780294aa6bf2f"
Oct 02 19:36:57 crc kubenswrapper[4832]: I1002 19:36:57.527731 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:36:57 crc kubenswrapper[4832]: E1002 19:36:57.528211 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:37:25 crc kubenswrapper[4832]: I1002 19:37:25.234008 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:37:25 crc kubenswrapper[4832]: E1002 19:37:25.235159 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:37:38 crc kubenswrapper[4832]: I1002 19:37:38.224385 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:37:38 crc kubenswrapper[4832]: E1002 19:37:38.226164 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:37:53 crc kubenswrapper[4832]: I1002 19:37:53.223763 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:37:53 crc kubenswrapper[4832]: E1002 19:37:53.224919 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:38:07 crc kubenswrapper[4832]: I1002 19:38:07.223513 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:38:07 crc kubenswrapper[4832]: E1002 19:38:07.224707 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:38:20 crc kubenswrapper[4832]: I1002 19:38:20.223318 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:38:20 crc kubenswrapper[4832]: E1002 19:38:20.224110 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:38:31 crc kubenswrapper[4832]: I1002 19:38:31.222810 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:38:31 crc kubenswrapper[4832]: E1002 19:38:31.223624 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:38:45 crc kubenswrapper[4832]: I1002 19:38:45.233104 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:38:45 crc kubenswrapper[4832]: E1002 19:38:45.234035 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:39:00 crc kubenswrapper[4832]: I1002 19:39:00.223145 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:39:00 crc kubenswrapper[4832]: E1002 19:39:00.224519 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:39:09 crc kubenswrapper[4832]: I1002 19:39:09.875513 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jzdzl"] Oct 02 19:39:09 crc kubenswrapper[4832]: E1002 19:39:09.876594 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="extract-utilities" Oct 02 19:39:09 crc kubenswrapper[4832]: I1002 19:39:09.876609 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="extract-utilities" Oct 02 19:39:09 crc kubenswrapper[4832]: E1002 19:39:09.876648 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="registry-server" Oct 02 19:39:09 crc kubenswrapper[4832]: I1002 19:39:09.876656 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="registry-server" Oct 02 19:39:09 crc kubenswrapper[4832]: E1002 19:39:09.876684 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="extract-content" Oct 02 19:39:09 crc kubenswrapper[4832]: I1002 19:39:09.876692 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="extract-content" Oct 02 19:39:09 crc kubenswrapper[4832]: I1002 19:39:09.876952 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="8619c269-c8f7-405a-935a-664d2ee6ce67" containerName="registry-server" Oct 02 19:39:09 crc kubenswrapper[4832]: I1002 19:39:09.879371 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:09 crc kubenswrapper[4832]: I1002 19:39:09.897220 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzdzl"] Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.033952 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsng9\" (UniqueName: \"kubernetes.io/projected/deaff448-1773-41bb-a28d-1f54ad88c5f1-kube-api-access-lsng9\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.034065 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-utilities\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.034146 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-catalog-content\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.047028 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4srsf"] Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.049410 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.089328 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4srsf"] Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.136840 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-utilities\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.136934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-catalog-content\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.137178 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsng9\" (UniqueName: \"kubernetes.io/projected/deaff448-1773-41bb-a28d-1f54ad88c5f1-kube-api-access-lsng9\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.137393 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-utilities\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.137679 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-catalog-content\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.160280 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsng9\" (UniqueName: \"kubernetes.io/projected/deaff448-1773-41bb-a28d-1f54ad88c5f1-kube-api-access-lsng9\") pod \"community-operators-jzdzl\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") " pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.212764 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.239059 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-utilities\") pod \"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.239301 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhfq\" (UniqueName: \"kubernetes.io/projected/371974f9-e3a1-4827-966c-4db01f0d3667-kube-api-access-cbhfq\") pod \"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.239347 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-catalog-content\") pod \"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.341957 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-utilities\") pod \"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.342032 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhfq\" (UniqueName: \"kubernetes.io/projected/371974f9-e3a1-4827-966c-4db01f0d3667-kube-api-access-cbhfq\") pod \"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.342091 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-catalog-content\") pod \"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.342851 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-catalog-content\") pod \"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.343090 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-utilities\") pod \"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.366185 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhfq\" (UniqueName: \"kubernetes.io/projected/371974f9-e3a1-4827-966c-4db01f0d3667-kube-api-access-cbhfq\") pod 
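Volume setup for the new catalog pods follows the kubelet's desired-state-of-world reconciler: VerifyControllerAttachedVolume confirms each volume is accounted for in the actual state of the world, then MountVolume.SetUp materializes the two emptyDir scratch volumes and the projected service-account token behind the kube-api-access-* volume. A rough Go reconstruction of that volume list (hypothetical: the real manifests are generated by the marketplace operator, and details such as the token expiry are assumptions):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical reconstruction of the volumes the reconciler mounts above:
	// two scratch emptyDirs plus the projected service-account token that
	// backs the "kube-api-access-lsng9" volume.
	expiry := int64(3607)
	volumes := []v1.Volume{
		{Name: "utilities", VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}}},
		{Name: "kube-api-access-lsng9", VolumeSource: v1.VolumeSource{Projected: &v1.ProjectedVolumeSource{
			Sources: []v1.VolumeProjection{{
				ServiceAccountToken: &v1.ServiceAccountTokenProjection{Path: "token", ExpirationSeconds: &expiry},
			}},
		}}},
	}
	for _, vol := range volumes {
		fmt.Println("would mount:", vol.Name)
	}
}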
\"redhat-marketplace-4srsf\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.372859 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.741308 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzdzl"] Oct 02 19:39:10 crc kubenswrapper[4832]: I1002 19:39:10.934954 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4srsf"] Oct 02 19:39:10 crc kubenswrapper[4832]: W1002 19:39:10.938288 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod371974f9_e3a1_4827_966c_4db01f0d3667.slice/crio-6c7a31069de6efc3e5aeab4e137e7a3bb1a211e9a2a06de4979bf5e8d86684ee WatchSource:0}: Error finding container 6c7a31069de6efc3e5aeab4e137e7a3bb1a211e9a2a06de4979bf5e8d86684ee: Status 404 returned error can't find the container with id 6c7a31069de6efc3e5aeab4e137e7a3bb1a211e9a2a06de4979bf5e8d86684ee Oct 02 19:39:11 crc kubenswrapper[4832]: I1002 19:39:11.349330 4832 generic.go:334] "Generic (PLEG): container finished" podID="371974f9-e3a1-4827-966c-4db01f0d3667" containerID="ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9" exitCode=0 Oct 02 19:39:11 crc kubenswrapper[4832]: I1002 19:39:11.349512 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4srsf" event={"ID":"371974f9-e3a1-4827-966c-4db01f0d3667","Type":"ContainerDied","Data":"ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9"} Oct 02 19:39:11 crc kubenswrapper[4832]: I1002 19:39:11.349580 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4srsf" event={"ID":"371974f9-e3a1-4827-966c-4db01f0d3667","Type":"ContainerStarted","Data":"6c7a31069de6efc3e5aeab4e137e7a3bb1a211e9a2a06de4979bf5e8d86684ee"} Oct 02 19:39:11 crc kubenswrapper[4832]: I1002 19:39:11.352200 4832 generic.go:334] "Generic (PLEG): container finished" podID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerID="e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f" exitCode=0 Oct 02 19:39:11 crc kubenswrapper[4832]: I1002 19:39:11.352248 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzdzl" event={"ID":"deaff448-1773-41bb-a28d-1f54ad88c5f1","Type":"ContainerDied","Data":"e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f"} Oct 02 19:39:11 crc kubenswrapper[4832]: I1002 19:39:11.352291 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzdzl" event={"ID":"deaff448-1773-41bb-a28d-1f54ad88c5f1","Type":"ContainerStarted","Data":"74f18264e3b113cf90f41483d6a510033233b75b29afc379373abcc40705aa67"} Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.454863 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hbbxl"] Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.460922 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.475686 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbbxl"] Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.500887 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94r44\" (UniqueName: \"kubernetes.io/projected/910ec153-249e-4497-a55b-97c0bbe34ca0-kube-api-access-94r44\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.501043 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-catalog-content\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.501460 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-utilities\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.603971 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-utilities\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.604147 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94r44\" (UniqueName: \"kubernetes.io/projected/910ec153-249e-4497-a55b-97c0bbe34ca0-kube-api-access-94r44\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.604216 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-catalog-content\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.604418 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-utilities\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.604789 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-catalog-content\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.631689 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-94r44\" (UniqueName: \"kubernetes.io/projected/910ec153-249e-4497-a55b-97c0bbe34ca0-kube-api-access-94r44\") pod \"certified-operators-hbbxl\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") " pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:12 crc kubenswrapper[4832]: I1002 19:39:12.799448 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:13 crc kubenswrapper[4832]: I1002 19:39:13.223227 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:39:13 crc kubenswrapper[4832]: E1002 19:39:13.224009 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:39:13 crc kubenswrapper[4832]: I1002 19:39:13.334538 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbbxl"] Oct 02 19:39:13 crc kubenswrapper[4832]: I1002 19:39:13.381825 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbxl" event={"ID":"910ec153-249e-4497-a55b-97c0bbe34ca0","Type":"ContainerStarted","Data":"17d98b2bdb9273444c71802c1e5db924701755aa3130bc91e2bc8182460d32e3"} Oct 02 19:39:13 crc kubenswrapper[4832]: I1002 19:39:13.384973 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4srsf" event={"ID":"371974f9-e3a1-4827-966c-4db01f0d3667","Type":"ContainerStarted","Data":"8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c"} Oct 02 19:39:13 crc kubenswrapper[4832]: I1002 19:39:13.390040 4832 generic.go:334] "Generic (PLEG): container finished" podID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerID="6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347" exitCode=0 Oct 02 19:39:13 crc kubenswrapper[4832]: I1002 19:39:13.390099 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzdzl" event={"ID":"deaff448-1773-41bb-a28d-1f54ad88c5f1","Type":"ContainerDied","Data":"6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347"} Oct 02 19:39:14 crc kubenswrapper[4832]: I1002 19:39:14.405347 4832 generic.go:334] "Generic (PLEG): container finished" podID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerID="f8c0c0744ac0a7a6e77938e77a75285b7abfca02de72f728e58e03871453c499" exitCode=0 Oct 02 19:39:14 crc kubenswrapper[4832]: I1002 19:39:14.405507 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbxl" event={"ID":"910ec153-249e-4497-a55b-97c0bbe34ca0","Type":"ContainerDied","Data":"f8c0c0744ac0a7a6e77938e77a75285b7abfca02de72f728e58e03871453c499"} Oct 02 19:39:14 crc kubenswrapper[4832]: I1002 19:39:14.418881 4832 generic.go:334] "Generic (PLEG): container finished" podID="371974f9-e3a1-4827-966c-4db01f0d3667" containerID="8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c" exitCode=0 Oct 02 19:39:14 crc kubenswrapper[4832]: I1002 19:39:14.418967 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4srsf" event={"ID":"371974f9-e3a1-4827-966c-4db01f0d3667","Type":"ContainerDied","Data":"8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c"} Oct 02 19:39:15 crc kubenswrapper[4832]: I1002 19:39:15.434643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzdzl" event={"ID":"deaff448-1773-41bb-a28d-1f54ad88c5f1","Type":"ContainerStarted","Data":"9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b"} Oct 02 19:39:15 crc kubenswrapper[4832]: I1002 19:39:15.437643 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4srsf" event={"ID":"371974f9-e3a1-4827-966c-4db01f0d3667","Type":"ContainerStarted","Data":"7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239"} Oct 02 19:39:15 crc kubenswrapper[4832]: I1002 19:39:15.464719 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jzdzl" podStartSLOduration=3.776708609 podStartE2EDuration="6.46469567s" podCreationTimestamp="2025-10-02 19:39:09 +0000 UTC" firstStartedPulling="2025-10-02 19:39:11.356382201 +0000 UTC m=+4708.325825103" lastFinishedPulling="2025-10-02 19:39:14.044369282 +0000 UTC m=+4711.013812164" observedRunningTime="2025-10-02 19:39:15.454628849 +0000 UTC m=+4712.424071741" watchObservedRunningTime="2025-10-02 19:39:15.46469567 +0000 UTC m=+4712.434138602" Oct 02 19:39:15 crc kubenswrapper[4832]: I1002 19:39:15.486541 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4srsf" podStartSLOduration=1.9188548829999998 podStartE2EDuration="5.486520416s" podCreationTimestamp="2025-10-02 19:39:10 +0000 UTC" firstStartedPulling="2025-10-02 19:39:11.353559574 +0000 UTC m=+4708.323002446" lastFinishedPulling="2025-10-02 19:39:14.921225106 +0000 UTC m=+4711.890667979" observedRunningTime="2025-10-02 19:39:15.481908964 +0000 UTC m=+4712.451351836" watchObservedRunningTime="2025-10-02 19:39:15.486520416 +0000 UTC m=+4712.455963298" Oct 02 19:39:16 crc kubenswrapper[4832]: I1002 19:39:16.458300 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbxl" event={"ID":"910ec153-249e-4497-a55b-97c0bbe34ca0","Type":"ContainerStarted","Data":"7093d484a80b091e474b19ede6a8847fcdccd7dff1bea8ceabb65816e6f0ae0f"} Oct 02 19:39:17 crc kubenswrapper[4832]: I1002 19:39:17.477986 4832 generic.go:334] "Generic (PLEG): container finished" podID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerID="7093d484a80b091e474b19ede6a8847fcdccd7dff1bea8ceabb65816e6f0ae0f" exitCode=0 Oct 02 19:39:17 crc kubenswrapper[4832]: I1002 19:39:17.478049 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbxl" event={"ID":"910ec153-249e-4497-a55b-97c0bbe34ca0","Type":"ContainerDied","Data":"7093d484a80b091e474b19ede6a8847fcdccd7dff1bea8ceabb65816e6f0ae0f"} Oct 02 19:39:18 crc kubenswrapper[4832]: I1002 19:39:18.506738 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbxl" event={"ID":"910ec153-249e-4497-a55b-97c0bbe34ca0","Type":"ContainerStarted","Data":"ac75fdd671452639ca9f02ff3fdf908192e42bd56a179fd1e13e1e8877417f78"} Oct 02 19:39:18 crc kubenswrapper[4832]: I1002 19:39:18.550896 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hbbxl" 
podStartSLOduration=3.047607029 podStartE2EDuration="6.550877578s" podCreationTimestamp="2025-10-02 19:39:12 +0000 UTC" firstStartedPulling="2025-10-02 19:39:14.407472893 +0000 UTC m=+4711.376915775" lastFinishedPulling="2025-10-02 19:39:17.910743432 +0000 UTC m=+4714.880186324" observedRunningTime="2025-10-02 19:39:18.53995181 +0000 UTC m=+4715.509394692" watchObservedRunningTime="2025-10-02 19:39:18.550877578 +0000 UTC m=+4715.520320460" Oct 02 19:39:20 crc kubenswrapper[4832]: I1002 19:39:20.215010 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:20 crc kubenswrapper[4832]: I1002 19:39:20.215399 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jzdzl" Oct 02 19:39:20 crc kubenswrapper[4832]: I1002 19:39:20.373595 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:20 crc kubenswrapper[4832]: I1002 19:39:20.373653 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:20 crc kubenswrapper[4832]: I1002 19:39:20.445134 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:20 crc kubenswrapper[4832]: I1002 19:39:20.587065 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:21 crc kubenswrapper[4832]: I1002 19:39:21.263769 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jzdzl" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="registry-server" probeResult="failure" output=< Oct 02 19:39:21 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:39:21 crc kubenswrapper[4832]: > Oct 02 19:39:22 crc kubenswrapper[4832]: I1002 19:39:22.449465 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4srsf"] Oct 02 19:39:22 crc kubenswrapper[4832]: I1002 19:39:22.552611 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4srsf" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" containerName="registry-server" containerID="cri-o://7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239" gracePeriod=2 Oct 02 19:39:22 crc kubenswrapper[4832]: I1002 19:39:22.799623 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:22 crc kubenswrapper[4832]: I1002 19:39:22.799783 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:22 crc kubenswrapper[4832]: I1002 19:39:22.855464 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.063182 4832 util.go:48] "No ready sandbox for pod can be found. 
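The startup-probe failure above (timeout: failed to connect service ":50051" within 1s) is a gRPC health check: registry-server only starts serving the grpc.health.v1.Health service on port 50051 once its catalog is unpacked, so the probe fails and the pod stays un-ready until then. The equivalent check written out in Go, assuming plaintext gRPC on localhost (the real probe runs inside the pod's network namespace):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "127.0.0.1:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		// Mirrors the log output: failed to connect service ":50051" within 1s.
		fmt.Println("probe failed:", err)
		return
	}
	defer conn.Close()
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("status:", resp.Status) // SERVING once the registry is ready
}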
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.207022 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbhfq\" (UniqueName: \"kubernetes.io/projected/371974f9-e3a1-4827-966c-4db01f0d3667-kube-api-access-cbhfq\") pod \"371974f9-e3a1-4827-966c-4db01f0d3667\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.207083 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-utilities\") pod \"371974f9-e3a1-4827-966c-4db01f0d3667\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.207343 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-catalog-content\") pod \"371974f9-e3a1-4827-966c-4db01f0d3667\" (UID: \"371974f9-e3a1-4827-966c-4db01f0d3667\") " Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.208349 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-utilities" (OuterVolumeSpecName: "utilities") pod "371974f9-e3a1-4827-966c-4db01f0d3667" (UID: "371974f9-e3a1-4827-966c-4db01f0d3667"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.218706 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371974f9-e3a1-4827-966c-4db01f0d3667-kube-api-access-cbhfq" (OuterVolumeSpecName: "kube-api-access-cbhfq") pod "371974f9-e3a1-4827-966c-4db01f0d3667" (UID: "371974f9-e3a1-4827-966c-4db01f0d3667"). InnerVolumeSpecName "kube-api-access-cbhfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.245544 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "371974f9-e3a1-4827-966c-4db01f0d3667" (UID: "371974f9-e3a1-4827-966c-4db01f0d3667"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.310140 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbhfq\" (UniqueName: \"kubernetes.io/projected/371974f9-e3a1-4827-966c-4db01f0d3667-kube-api-access-cbhfq\") on node \"crc\" DevicePath \"\"" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.310175 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.310187 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/371974f9-e3a1-4827-966c-4db01f0d3667-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.569636 4832 generic.go:334] "Generic (PLEG): container finished" podID="371974f9-e3a1-4827-966c-4db01f0d3667" containerID="7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239" exitCode=0 Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.569717 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4srsf" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.569726 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4srsf" event={"ID":"371974f9-e3a1-4827-966c-4db01f0d3667","Type":"ContainerDied","Data":"7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239"} Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.570346 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4srsf" event={"ID":"371974f9-e3a1-4827-966c-4db01f0d3667","Type":"ContainerDied","Data":"6c7a31069de6efc3e5aeab4e137e7a3bb1a211e9a2a06de4979bf5e8d86684ee"} Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.570411 4832 scope.go:117] "RemoveContainer" containerID="7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.599433 4832 scope.go:117] "RemoveContainer" containerID="8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.624850 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4srsf"] Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.641486 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4srsf"] Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.648227 4832 scope.go:117] "RemoveContainer" containerID="ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.662175 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hbbxl" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.729366 4832 scope.go:117] "RemoveContainer" containerID="7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239" Oct 02 19:39:23 crc kubenswrapper[4832]: E1002 19:39:23.729713 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239\": container with ID starting with 
7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239 not found: ID does not exist" containerID="7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.729748 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239"} err="failed to get container status \"7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239\": rpc error: code = NotFound desc = could not find container \"7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239\": container with ID starting with 7595e3dd8b91632dba8608cdb9ca26fc831a27ba4accbdcf0db9836fe9dc3239 not found: ID does not exist" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.729832 4832 scope.go:117] "RemoveContainer" containerID="8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c" Oct 02 19:39:23 crc kubenswrapper[4832]: E1002 19:39:23.730114 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c\": container with ID starting with 8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c not found: ID does not exist" containerID="8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.730135 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c"} err="failed to get container status \"8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c\": rpc error: code = NotFound desc = could not find container \"8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c\": container with ID starting with 8a2f62fbd0eefa26aef25c33a1cc34d98b9c59ecdcf00857f337d19b07ca305c not found: ID does not exist" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.730149 4832 scope.go:117] "RemoveContainer" containerID="ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9" Oct 02 19:39:23 crc kubenswrapper[4832]: E1002 19:39:23.730506 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9\": container with ID starting with ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9 not found: ID does not exist" containerID="ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9" Oct 02 19:39:23 crc kubenswrapper[4832]: I1002 19:39:23.730567 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9"} err="failed to get container status \"ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9\": rpc error: code = NotFound desc = could not find container \"ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9\": container with ID starting with ba5629c6e01a1cbd16af02a85c3aceca95ad3d4596f35c3de4c64abb69f803b9 not found: ID does not exist" Oct 02 19:39:25 crc kubenswrapper[4832]: I1002 19:39:25.240195 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" path="/var/lib/kubelet/pods/371974f9-e3a1-4827-966c-4db01f0d3667/volumes" Oct 02 19:39:25 crc kubenswrapper[4832]: I1002 19:39:25.243915 
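The teardown above is the standard graceful-deletion sequence: an API DELETE ("SyncLoop DELETE"), a kill with the pod's 2-second grace period, volume unmount and "Volume detached" for all three volumes, API finalization ("SyncLoop REMOVE"), another round of the benign NotFound races, and finally the housekeeping pass that removes the orphaned volumes directory. Issuing the same deletion through client-go looks roughly like this (in-cluster config assumed; the grace period mirrors gracePeriod=2 in the kill message):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	grace := int64(2) // matches gracePeriod=2 in the "Killing container" entry above
	if err := client.CoreV1().Pods("openshift-marketplace").Delete(context.TODO(),
		"redhat-marketplace-4srsf", metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
		panic(err)
	}
}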
Oct 02 19:39:25 crc kubenswrapper[4832]: I1002 19:39:25.243915 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbbxl"]
Oct 02 19:39:26 crc kubenswrapper[4832]: I1002 19:39:26.607084 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hbbxl" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerName="registry-server" containerID="cri-o://ac75fdd671452639ca9f02ff3fdf908192e42bd56a179fd1e13e1e8877417f78" gracePeriod=2
Oct 02 19:39:27 crc kubenswrapper[4832]: I1002 19:39:27.620770 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbxl" event={"ID":"910ec153-249e-4497-a55b-97c0bbe34ca0","Type":"ContainerDied","Data":"ac75fdd671452639ca9f02ff3fdf908192e42bd56a179fd1e13e1e8877417f78"}
Oct 02 19:39:27 crc kubenswrapper[4832]: I1002 19:39:27.620763 4832 generic.go:334] "Generic (PLEG): container finished" podID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerID="ac75fdd671452639ca9f02ff3fdf908192e42bd56a179fd1e13e1e8877417f78" exitCode=0
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.223177 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:39:28 crc kubenswrapper[4832]: E1002 19:39:28.223875 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.316755 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbbxl"
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.438235 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-utilities\") pod \"910ec153-249e-4497-a55b-97c0bbe34ca0\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") "
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.438569 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94r44\" (UniqueName: \"kubernetes.io/projected/910ec153-249e-4497-a55b-97c0bbe34ca0-kube-api-access-94r44\") pod \"910ec153-249e-4497-a55b-97c0bbe34ca0\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") "
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.438705 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-catalog-content\") pod \"910ec153-249e-4497-a55b-97c0bbe34ca0\" (UID: \"910ec153-249e-4497-a55b-97c0bbe34ca0\") "
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.439493 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-utilities" (OuterVolumeSpecName: "utilities") pod "910ec153-249e-4497-a55b-97c0bbe34ca0" (UID: "910ec153-249e-4497-a55b-97c0bbe34ca0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.444799 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/910ec153-249e-4497-a55b-97c0bbe34ca0-kube-api-access-94r44" (OuterVolumeSpecName: "kube-api-access-94r44") pod "910ec153-249e-4497-a55b-97c0bbe34ca0" (UID: "910ec153-249e-4497-a55b-97c0bbe34ca0"). InnerVolumeSpecName "kube-api-access-94r44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.494734 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "910ec153-249e-4497-a55b-97c0bbe34ca0" (UID: "910ec153-249e-4497-a55b-97c0bbe34ca0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.542393 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94r44\" (UniqueName: \"kubernetes.io/projected/910ec153-249e-4497-a55b-97c0bbe34ca0-kube-api-access-94r44\") on node \"crc\" DevicePath \"\""
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.542436 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.542450 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/910ec153-249e-4497-a55b-97c0bbe34ca0-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.632523 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbbxl" event={"ID":"910ec153-249e-4497-a55b-97c0bbe34ca0","Type":"ContainerDied","Data":"17d98b2bdb9273444c71802c1e5db924701755aa3130bc91e2bc8182460d32e3"}
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.632585 4832 scope.go:117] "RemoveContainer" containerID="ac75fdd671452639ca9f02ff3fdf908192e42bd56a179fd1e13e1e8877417f78"
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.632600 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hbbxl"
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.671236 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbbxl"]
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.672331 4832 scope.go:117] "RemoveContainer" containerID="7093d484a80b091e474b19ede6a8847fcdccd7dff1bea8ceabb65816e6f0ae0f"
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.683466 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hbbxl"]
Oct 02 19:39:28 crc kubenswrapper[4832]: I1002 19:39:28.694144 4832 scope.go:117] "RemoveContainer" containerID="f8c0c0744ac0a7a6e77938e77a75285b7abfca02de72f728e58e03871453c499"
Oct 02 19:39:29 crc kubenswrapper[4832]: I1002 19:39:29.240711 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" path="/var/lib/kubelet/pods/910ec153-249e-4497-a55b-97c0bbe34ca0/volumes"
Oct 02 19:39:30 crc kubenswrapper[4832]: I1002 19:39:30.273478 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jzdzl"
Oct 02 19:39:30 crc kubenswrapper[4832]: I1002 19:39:30.331327 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jzdzl"
Oct 02 19:39:31 crc kubenswrapper[4832]: I1002 19:39:31.241617 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzdzl"]
Oct 02 19:39:31 crc kubenswrapper[4832]: I1002 19:39:31.691656 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jzdzl" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="registry-server" containerID="cri-o://9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b" gracePeriod=2
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.319454 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzdzl"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.443478 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-catalog-content\") pod \"deaff448-1773-41bb-a28d-1f54ad88c5f1\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") "
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.443982 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsng9\" (UniqueName: \"kubernetes.io/projected/deaff448-1773-41bb-a28d-1f54ad88c5f1-kube-api-access-lsng9\") pod \"deaff448-1773-41bb-a28d-1f54ad88c5f1\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") "
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.444915 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-utilities\") pod \"deaff448-1773-41bb-a28d-1f54ad88c5f1\" (UID: \"deaff448-1773-41bb-a28d-1f54ad88c5f1\") "
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.445645 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-utilities" (OuterVolumeSpecName: "utilities") pod "deaff448-1773-41bb-a28d-1f54ad88c5f1" (UID: "deaff448-1773-41bb-a28d-1f54ad88c5f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.452681 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaff448-1773-41bb-a28d-1f54ad88c5f1-kube-api-access-lsng9" (OuterVolumeSpecName: "kube-api-access-lsng9") pod "deaff448-1773-41bb-a28d-1f54ad88c5f1" (UID: "deaff448-1773-41bb-a28d-1f54ad88c5f1"). InnerVolumeSpecName "kube-api-access-lsng9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.506562 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "deaff448-1773-41bb-a28d-1f54ad88c5f1" (UID: "deaff448-1773-41bb-a28d-1f54ad88c5f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.547816 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.548036 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsng9\" (UniqueName: \"kubernetes.io/projected/deaff448-1773-41bb-a28d-1f54ad88c5f1-kube-api-access-lsng9\") on node \"crc\" DevicePath \"\""
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.548102 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deaff448-1773-41bb-a28d-1f54ad88c5f1-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.707885 4832 generic.go:334] "Generic (PLEG): container finished" podID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerID="9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b" exitCode=0
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.707935 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzdzl" event={"ID":"deaff448-1773-41bb-a28d-1f54ad88c5f1","Type":"ContainerDied","Data":"9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b"}
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.707967 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzdzl" event={"ID":"deaff448-1773-41bb-a28d-1f54ad88c5f1","Type":"ContainerDied","Data":"74f18264e3b113cf90f41483d6a510033233b75b29afc379373abcc40705aa67"}
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.707988 4832 scope.go:117] "RemoveContainer" containerID="9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.708006 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzdzl"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.736997 4832 scope.go:117] "RemoveContainer" containerID="6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.763702 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzdzl"]
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.775459 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jzdzl"]
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.792961 4832 scope.go:117] "RemoveContainer" containerID="e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.832993 4832 scope.go:117] "RemoveContainer" containerID="9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b"
Oct 02 19:39:32 crc kubenswrapper[4832]: E1002 19:39:32.833417 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b\": container with ID starting with 9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b not found: ID does not exist" containerID="9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.833447 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b"} err="failed to get container status \"9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b\": rpc error: code = NotFound desc = could not find container \"9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b\": container with ID starting with 9fe16eb045575a425f1ada3462024bbc3abac7d33beda55854df3a7159990d0b not found: ID does not exist"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.833468 4832 scope.go:117] "RemoveContainer" containerID="6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347"
Oct 02 19:39:32 crc kubenswrapper[4832]: E1002 19:39:32.833837 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347\": container with ID starting with 6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347 not found: ID does not exist" containerID="6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.833899 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347"} err="failed to get container status \"6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347\": rpc error: code = NotFound desc = could not find container \"6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347\": container with ID starting with 6fbbf70ce6064ee385a186169433e4cbca953938b55fe186813b7f07117d2347 not found: ID does not exist"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.833932 4832 scope.go:117] "RemoveContainer" containerID="e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f"
Oct 02 19:39:32 crc kubenswrapper[4832]: E1002 19:39:32.834232 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f\": container with ID starting with e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f not found: ID does not exist" containerID="e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f"
Oct 02 19:39:32 crc kubenswrapper[4832]: I1002 19:39:32.834269 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f"} err="failed to get container status \"e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f\": rpc error: code = NotFound desc = could not find container \"e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f\": container with ID starting with e97e1961c6643409cabb1f994483c0a8903320dc099503da0acc50daf1f69f9f not found: ID does not exist"
Oct 02 19:39:33 crc kubenswrapper[4832]: I1002 19:39:33.240053 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" path="/var/lib/kubelet/pods/deaff448-1773-41bb-a28d-1f54ad88c5f1/volumes"
Oct 02 19:39:41 crc kubenswrapper[4832]: I1002 19:39:41.222870 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:39:41 crc kubenswrapper[4832]: E1002 19:39:41.225035 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.651556 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"]
Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652603 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="extract-utilities"
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652621 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="extract-utilities"
Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652635 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerName="extract-content"
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652642 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerName="extract-content"
Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652658 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerName="extract-utilities"
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652666 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerName="extract-utilities"
Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652681 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="registry-server"
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652688 4832 state_mem.go:107] "Deleted CPUSet assignment"
podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="registry-server" Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652704 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" containerName="registry-server" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652711 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" containerName="registry-server" Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652743 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" containerName="extract-utilities" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652750 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" containerName="extract-utilities" Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652768 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerName="registry-server" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652774 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerName="registry-server" Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652799 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="extract-content" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652806 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="extract-content" Oct 02 19:39:48 crc kubenswrapper[4832]: E1002 19:39:48.652818 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" containerName="extract-content" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.652825 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" containerName="extract-content" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.653089 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="910ec153-249e-4497-a55b-97c0bbe34ca0" containerName="registry-server" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.653110 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="371974f9-e3a1-4827-966c-4db01f0d3667" containerName="registry-server" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.653142 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="deaff448-1773-41bb-a28d-1f54ad88c5f1" containerName="registry-server" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.654202 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.661475 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2rdcd" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.661936 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.662432 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.662801 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.679444 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.768397 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.768676 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.768730 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.768769 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-config-data\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.768795 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.768942 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.769002 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.769052 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99f2k\" (UniqueName: \"kubernetes.io/projected/040c96d0-9636-499a-9986-fb79a73e7b2d-kube-api-access-99f2k\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.769078 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871553 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871614 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871655 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-config-data\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871676 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871709 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871728 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871760 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99f2k\" (UniqueName: \"kubernetes.io/projected/040c96d0-9636-499a-9986-fb79a73e7b2d-kube-api-access-99f2k\") 
pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871782 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.871803 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.872144 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.872805 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.873411 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.873586 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-config-data\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.874812 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.878683 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.879025 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest" Oct 02 19:39:48 crc 
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.881742 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest"
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.893453 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99f2k\" (UniqueName: \"kubernetes.io/projected/040c96d0-9636-499a-9986-fb79a73e7b2d-kube-api-access-99f2k\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest"
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.918632 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " pod="openstack/tempest-tests-tempest"
Oct 02 19:39:48 crc kubenswrapper[4832]: I1002 19:39:48.987857 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 02 19:39:49 crc kubenswrapper[4832]: I1002 19:39:49.526004 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Oct 02 19:39:49 crc kubenswrapper[4832]: W1002 19:39:49.534861 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod040c96d0_9636_499a_9986_fb79a73e7b2d.slice/crio-91b6af882844a0b3f9893047e3a690125a378880fcc5e4eda614936c5b21b19d WatchSource:0}: Error finding container 91b6af882844a0b3f9893047e3a690125a378880fcc5e4eda614936c5b21b19d: Status 404 returned error can't find the container with id 91b6af882844a0b3f9893047e3a690125a378880fcc5e4eda614936c5b21b19d
Oct 02 19:39:49 crc kubenswrapper[4832]: I1002 19:39:49.918685 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"040c96d0-9636-499a-9986-fb79a73e7b2d","Type":"ContainerStarted","Data":"91b6af882844a0b3f9893047e3a690125a378880fcc5e4eda614936c5b21b19d"}
Oct 02 19:39:56 crc kubenswrapper[4832]: I1002 19:39:56.223412 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:39:56 crc kubenswrapper[4832]: E1002 19:39:56.224325 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:40:09 crc kubenswrapper[4832]: I1002 19:40:09.224557 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:40:09 crc kubenswrapper[4832]: E1002 19:40:09.225842 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:40:23 crc kubenswrapper[4832]: I1002 19:40:23.223197 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:40:23 crc kubenswrapper[4832]: E1002 19:40:23.224284 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:40:23 crc kubenswrapper[4832]: E1002 19:40:23.332492 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Oct 02 19:40:23 crc kubenswrapper[4832]: E1002 19:40:23.337106 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99f2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(040c96d0-9636-499a-9986-fb79a73e7b2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Oct 02 19:40:23 crc kubenswrapper[4832]: E1002 19:40:23.338355 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="040c96d0-9636-499a-9986-fb79a73e7b2d"
Oct 02 19:40:23 crc kubenswrapper[4832]: E1002 19:40:23.360981 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="040c96d0-9636-499a-9986-fb79a73e7b2d"
Oct 02 19:40:38 crc kubenswrapper[4832]: I1002 19:40:38.223379 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:40:38 crc kubenswrapper[4832]: E1002 19:40:38.224322 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:40:38 crc kubenswrapper[4832]: I1002 19:40:38.546677 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"040c96d0-9636-499a-9986-fb79a73e7b2d","Type":"ContainerStarted","Data":"1753d1db5ce6a3d635d6ce8c5d6f0839b1cc7b18732db5308835b482f6a812ef"}
Oct 02 19:40:38 crc kubenswrapper[4832]: I1002 19:40:38.585485 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.191695123 podStartE2EDuration="51.585467158s" podCreationTimestamp="2025-10-02 19:39:47 +0000 UTC" firstStartedPulling="2025-10-02 19:39:49.53723698 +0000 UTC m=+4746.506679892" lastFinishedPulling="2025-10-02 19:40:35.931009055 +0000 UTC m=+4792.900451927" observedRunningTime="2025-10-02 19:40:38.581647849 +0000 UTC m=+4795.551090721" watchObservedRunningTime="2025-10-02 19:40:38.585467158 +0000 UTC m=+4795.554910030"
Oct 02 19:40:49 crc kubenswrapper[4832]: I1002 19:40:49.223241 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:40:49 crc kubenswrapper[4832]: E1002 19:40:49.224366 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:41:01 crc kubenswrapper[4832]: I1002 19:41:01.223365 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:41:01 crc kubenswrapper[4832]: E1002 19:41:01.224156 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:41:12 crc kubenswrapper[4832]: I1002 19:41:12.222409 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:41:12 crc kubenswrapper[4832]: E1002 19:41:12.223017 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:41:25 crc kubenswrapper[4832]: I1002 19:41:25.223468 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:41:25 crc kubenswrapper[4832]: E1002 19:41:25.224217 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:41:36 crc kubenswrapper[4832]: I1002 19:41:36.222948 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
Oct 02 19:41:36 crc kubenswrapper[4832]: E1002 19:41:36.223972 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 19:41:51 crc kubenswrapper[4832]: I1002 19:41:51.223688 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0"
pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:42:04 crc kubenswrapper[4832]: I1002 19:42:04.224498 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:42:04 crc kubenswrapper[4832]: I1002 19:42:04.613144 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"19082400a879325a6f1492d474d87668a75d8d0ea380b058a21df0e84596bcae"} Oct 02 19:44:26 crc kubenswrapper[4832]: I1002 19:44:26.875676 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:44:26 crc kubenswrapper[4832]: I1002 19:44:26.879140 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:44:56 crc kubenswrapper[4832]: I1002 19:44:56.888631 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:44:56 crc kubenswrapper[4832]: I1002 19:44:56.889223 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.338572 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl"] Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.352665 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.373578 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl"] Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.392615 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.392802 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.480291 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaad0068-4226-4a41-a115-3d574337b00b-secret-volume\") pod \"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.480908 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqx7b\" (UniqueName: \"kubernetes.io/projected/aaad0068-4226-4a41-a115-3d574337b00b-kube-api-access-fqx7b\") pod \"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.480998 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaad0068-4226-4a41-a115-3d574337b00b-config-volume\") pod \"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.583400 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqx7b\" (UniqueName: \"kubernetes.io/projected/aaad0068-4226-4a41-a115-3d574337b00b-kube-api-access-fqx7b\") pod \"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.583519 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaad0068-4226-4a41-a115-3d574337b00b-config-volume\") pod \"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.583575 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaad0068-4226-4a41-a115-3d574337b00b-secret-volume\") pod \"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.584922 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaad0068-4226-4a41-a115-3d574337b00b-config-volume\") pod 
\"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.593931 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaad0068-4226-4a41-a115-3d574337b00b-secret-volume\") pod \"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.602372 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqx7b\" (UniqueName: \"kubernetes.io/projected/aaad0068-4226-4a41-a115-3d574337b00b-kube-api-access-fqx7b\") pod \"collect-profiles-29323905-j2wfl\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:00 crc kubenswrapper[4832]: I1002 19:45:00.706389 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:03 crc kubenswrapper[4832]: I1002 19:45:03.341753 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl"] Oct 02 19:45:03 crc kubenswrapper[4832]: W1002 19:45:03.378932 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaad0068_4226_4a41_a115_3d574337b00b.slice/crio-79e95e3fd77df69af65bfdc85b079b2ad8929e79591f0e3feab3b1a8c9d90015 WatchSource:0}: Error finding container 79e95e3fd77df69af65bfdc85b079b2ad8929e79591f0e3feab3b1a8c9d90015: Status 404 returned error can't find the container with id 79e95e3fd77df69af65bfdc85b079b2ad8929e79591f0e3feab3b1a8c9d90015 Oct 02 19:45:03 crc kubenswrapper[4832]: I1002 19:45:03.788831 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" event={"ID":"aaad0068-4226-4a41-a115-3d574337b00b","Type":"ContainerStarted","Data":"82c2a4a706e7367a3f4ad85273545a75c065718529d2e6c7063825a1a95d751a"} Oct 02 19:45:03 crc kubenswrapper[4832]: I1002 19:45:03.789179 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" event={"ID":"aaad0068-4226-4a41-a115-3d574337b00b","Type":"ContainerStarted","Data":"79e95e3fd77df69af65bfdc85b079b2ad8929e79591f0e3feab3b1a8c9d90015"} Oct 02 19:45:03 crc kubenswrapper[4832]: I1002 19:45:03.812728 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" podStartSLOduration=3.812705423 podStartE2EDuration="3.812705423s" podCreationTimestamp="2025-10-02 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 19:45:03.812685533 +0000 UTC m=+5060.782128405" watchObservedRunningTime="2025-10-02 19:45:03.812705423 +0000 UTC m=+5060.782148295" Oct 02 19:45:04 crc kubenswrapper[4832]: I1002 19:45:04.801458 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" 
event={"ID":"aaad0068-4226-4a41-a115-3d574337b00b","Type":"ContainerDied","Data":"82c2a4a706e7367a3f4ad85273545a75c065718529d2e6c7063825a1a95d751a"} Oct 02 19:45:04 crc kubenswrapper[4832]: I1002 19:45:04.801835 4832 generic.go:334] "Generic (PLEG): container finished" podID="aaad0068-4226-4a41-a115-3d574337b00b" containerID="82c2a4a706e7367a3f4ad85273545a75c065718529d2e6c7063825a1a95d751a" exitCode=0 Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.369077 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.450919 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaad0068-4226-4a41-a115-3d574337b00b-secret-volume\") pod \"aaad0068-4226-4a41-a115-3d574337b00b\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.451111 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqx7b\" (UniqueName: \"kubernetes.io/projected/aaad0068-4226-4a41-a115-3d574337b00b-kube-api-access-fqx7b\") pod \"aaad0068-4226-4a41-a115-3d574337b00b\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.451334 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaad0068-4226-4a41-a115-3d574337b00b-config-volume\") pod \"aaad0068-4226-4a41-a115-3d574337b00b\" (UID: \"aaad0068-4226-4a41-a115-3d574337b00b\") " Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.454010 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaad0068-4226-4a41-a115-3d574337b00b-config-volume" (OuterVolumeSpecName: "config-volume") pod "aaad0068-4226-4a41-a115-3d574337b00b" (UID: "aaad0068-4226-4a41-a115-3d574337b00b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.461339 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaad0068-4226-4a41-a115-3d574337b00b-kube-api-access-fqx7b" (OuterVolumeSpecName: "kube-api-access-fqx7b") pod "aaad0068-4226-4a41-a115-3d574337b00b" (UID: "aaad0068-4226-4a41-a115-3d574337b00b"). InnerVolumeSpecName "kube-api-access-fqx7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.461455 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaad0068-4226-4a41-a115-3d574337b00b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aaad0068-4226-4a41-a115-3d574337b00b" (UID: "aaad0068-4226-4a41-a115-3d574337b00b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.554776 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aaad0068-4226-4a41-a115-3d574337b00b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.554809 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqx7b\" (UniqueName: \"kubernetes.io/projected/aaad0068-4226-4a41-a115-3d574337b00b-kube-api-access-fqx7b\") on node \"crc\" DevicePath \"\"" Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.554818 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aaad0068-4226-4a41-a115-3d574337b00b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.821037 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" event={"ID":"aaad0068-4226-4a41-a115-3d574337b00b","Type":"ContainerDied","Data":"79e95e3fd77df69af65bfdc85b079b2ad8929e79591f0e3feab3b1a8c9d90015"} Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.821118 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323905-j2wfl" Oct 02 19:45:06 crc kubenswrapper[4832]: I1002 19:45:06.842078 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e95e3fd77df69af65bfdc85b079b2ad8929e79591f0e3feab3b1a8c9d90015" Oct 02 19:45:07 crc kubenswrapper[4832]: I1002 19:45:07.461595 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b"] Oct 02 19:45:07 crc kubenswrapper[4832]: I1002 19:45:07.470779 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323860-2t96b"] Oct 02 19:45:09 crc kubenswrapper[4832]: I1002 19:45:09.238765 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89ed500-6e46-4b64-bae2-601ca08b5174" path="/var/lib/kubelet/pods/f89ed500-6e46-4b64-bae2-601ca08b5174/volumes" Oct 02 19:45:23 crc kubenswrapper[4832]: I1002 19:45:23.532651 4832 scope.go:117] "RemoveContainer" containerID="9133a8d46d4ab840ebca7eb74152768347922dad93e8ea6c8263be938a272eff" Oct 02 19:45:26 crc kubenswrapper[4832]: I1002 19:45:26.875270 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:45:26 crc kubenswrapper[4832]: I1002 19:45:26.876934 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:45:26 crc kubenswrapper[4832]: I1002 19:45:26.877069 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 19:45:26 crc kubenswrapper[4832]: I1002 19:45:26.878014 4832 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19082400a879325a6f1492d474d87668a75d8d0ea380b058a21df0e84596bcae"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 19:45:26 crc kubenswrapper[4832]: I1002 19:45:26.878187 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://19082400a879325a6f1492d474d87668a75d8d0ea380b058a21df0e84596bcae" gracePeriod=600 Oct 02 19:45:27 crc kubenswrapper[4832]: I1002 19:45:27.053800 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="19082400a879325a6f1492d474d87668a75d8d0ea380b058a21df0e84596bcae" exitCode=0 Oct 02 19:45:27 crc kubenswrapper[4832]: I1002 19:45:27.053869 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"19082400a879325a6f1492d474d87668a75d8d0ea380b058a21df0e84596bcae"} Oct 02 19:45:27 crc kubenswrapper[4832]: I1002 19:45:27.054125 4832 scope.go:117] "RemoveContainer" containerID="0f814c55cc3b32ffedbe18b66855b4034167ebd49d220c3c19ca4e4bbf1c64e0" Oct 02 19:45:28 crc kubenswrapper[4832]: I1002 19:45:28.065911 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4"} Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.418446 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tp7k8"] Oct 02 19:46:11 crc kubenswrapper[4832]: E1002 19:46:11.419257 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaad0068-4226-4a41-a115-3d574337b00b" containerName="collect-profiles" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.419281 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaad0068-4226-4a41-a115-3d574337b00b" containerName="collect-profiles" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.419622 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaad0068-4226-4a41-a115-3d574337b00b" containerName="collect-profiles" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.421672 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.436390 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp7k8"] Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.506610 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-catalog-content\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.506661 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-utilities\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.506922 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5q99\" (UniqueName: \"kubernetes.io/projected/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-kube-api-access-n5q99\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.609851 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5q99\" (UniqueName: \"kubernetes.io/projected/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-kube-api-access-n5q99\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.610042 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-catalog-content\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.610084 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-utilities\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.610668 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-utilities\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.610939 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-catalog-content\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.635321 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n5q99\" (UniqueName: \"kubernetes.io/projected/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-kube-api-access-n5q99\") pod \"redhat-operators-tp7k8\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:11 crc kubenswrapper[4832]: I1002 19:46:11.743051 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:12 crc kubenswrapper[4832]: I1002 19:46:12.230794 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tp7k8"] Oct 02 19:46:12 crc kubenswrapper[4832]: I1002 19:46:12.540484 4832 generic.go:334] "Generic (PLEG): container finished" podID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerID="7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07" exitCode=0 Oct 02 19:46:12 crc kubenswrapper[4832]: I1002 19:46:12.540593 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp7k8" event={"ID":"b8e0d595-bd53-495c-9e1c-e5acc862e2cc","Type":"ContainerDied","Data":"7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07"} Oct 02 19:46:12 crc kubenswrapper[4832]: I1002 19:46:12.540812 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp7k8" event={"ID":"b8e0d595-bd53-495c-9e1c-e5acc862e2cc","Type":"ContainerStarted","Data":"e1c7e3881f4bc05f656dce9d455e124f60af97fbd603b71a87e4fc4f6a630f51"} Oct 02 19:46:12 crc kubenswrapper[4832]: I1002 19:46:12.545618 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 19:46:14 crc kubenswrapper[4832]: I1002 19:46:14.574562 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp7k8" event={"ID":"b8e0d595-bd53-495c-9e1c-e5acc862e2cc","Type":"ContainerStarted","Data":"8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5"} Oct 02 19:46:18 crc kubenswrapper[4832]: I1002 19:46:18.620319 4832 generic.go:334] "Generic (PLEG): container finished" podID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerID="8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5" exitCode=0 Oct 02 19:46:18 crc kubenswrapper[4832]: I1002 19:46:18.620829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp7k8" event={"ID":"b8e0d595-bd53-495c-9e1c-e5acc862e2cc","Type":"ContainerDied","Data":"8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5"} Oct 02 19:46:19 crc kubenswrapper[4832]: I1002 19:46:19.634103 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp7k8" event={"ID":"b8e0d595-bd53-495c-9e1c-e5acc862e2cc","Type":"ContainerStarted","Data":"c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c"} Oct 02 19:46:19 crc kubenswrapper[4832]: I1002 19:46:19.656086 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tp7k8" podStartSLOduration=2.09172148 podStartE2EDuration="8.656065218s" podCreationTimestamp="2025-10-02 19:46:11 +0000 UTC" firstStartedPulling="2025-10-02 19:46:12.54237731 +0000 UTC m=+5129.511820182" lastFinishedPulling="2025-10-02 19:46:19.106721048 +0000 UTC m=+5136.076163920" observedRunningTime="2025-10-02 19:46:19.652348873 +0000 UTC m=+5136.621791765" watchObservedRunningTime="2025-10-02 19:46:19.656065218 +0000 UTC m=+5136.625508090" Oct 02 19:46:21 crc 
kubenswrapper[4832]: I1002 19:46:21.743498 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:21 crc kubenswrapper[4832]: I1002 19:46:21.743965 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:22 crc kubenswrapper[4832]: I1002 19:46:22.826313 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tp7k8" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="registry-server" probeResult="failure" output=< Oct 02 19:46:22 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:46:22 crc kubenswrapper[4832]: > Oct 02 19:46:32 crc kubenswrapper[4832]: I1002 19:46:32.947302 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tp7k8" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="registry-server" probeResult="failure" output=< Oct 02 19:46:32 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:46:32 crc kubenswrapper[4832]: > Oct 02 19:46:41 crc kubenswrapper[4832]: I1002 19:46:41.826073 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:41 crc kubenswrapper[4832]: I1002 19:46:41.891722 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:42 crc kubenswrapper[4832]: I1002 19:46:42.627066 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tp7k8"] Oct 02 19:46:42 crc kubenswrapper[4832]: I1002 19:46:42.908540 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tp7k8" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="registry-server" containerID="cri-o://c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c" gracePeriod=2 Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.521689 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.643093 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-utilities\") pod \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.643455 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-catalog-content\") pod \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.643602 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5q99\" (UniqueName: \"kubernetes.io/projected/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-kube-api-access-n5q99\") pod \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\" (UID: \"b8e0d595-bd53-495c-9e1c-e5acc862e2cc\") " Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.644562 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-utilities" (OuterVolumeSpecName: "utilities") pod "b8e0d595-bd53-495c-9e1c-e5acc862e2cc" (UID: "b8e0d595-bd53-495c-9e1c-e5acc862e2cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.649816 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.669739 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-kube-api-access-n5q99" (OuterVolumeSpecName: "kube-api-access-n5q99") pod "b8e0d595-bd53-495c-9e1c-e5acc862e2cc" (UID: "b8e0d595-bd53-495c-9e1c-e5acc862e2cc"). InnerVolumeSpecName "kube-api-access-n5q99". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.751468 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8e0d595-bd53-495c-9e1c-e5acc862e2cc" (UID: "b8e0d595-bd53-495c-9e1c-e5acc862e2cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.754224 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.754373 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5q99\" (UniqueName: \"kubernetes.io/projected/b8e0d595-bd53-495c-9e1c-e5acc862e2cc-kube-api-access-n5q99\") on node \"crc\" DevicePath \"\"" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.925501 4832 generic.go:334] "Generic (PLEG): container finished" podID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerID="c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c" exitCode=0 Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.925596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp7k8" event={"ID":"b8e0d595-bd53-495c-9e1c-e5acc862e2cc","Type":"ContainerDied","Data":"c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c"} Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.925659 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tp7k8" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.925980 4832 scope.go:117] "RemoveContainer" containerID="c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.925960 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tp7k8" event={"ID":"b8e0d595-bd53-495c-9e1c-e5acc862e2cc","Type":"ContainerDied","Data":"e1c7e3881f4bc05f656dce9d455e124f60af97fbd603b71a87e4fc4f6a630f51"} Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.946979 4832 scope.go:117] "RemoveContainer" containerID="8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.982597 4832 scope.go:117] "RemoveContainer" containerID="7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07" Oct 02 19:46:43 crc kubenswrapper[4832]: I1002 19:46:43.987042 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tp7k8"] Oct 02 19:46:44 crc kubenswrapper[4832]: I1002 19:46:44.000874 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tp7k8"] Oct 02 19:46:44 crc kubenswrapper[4832]: I1002 19:46:44.026423 4832 scope.go:117] "RemoveContainer" containerID="c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c" Oct 02 19:46:44 crc kubenswrapper[4832]: E1002 19:46:44.030456 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c\": container with ID starting with c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c not found: ID does not exist" containerID="c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c" Oct 02 19:46:44 crc kubenswrapper[4832]: I1002 19:46:44.030520 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c"} err="failed to get container status \"c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c\": 
rpc error: code = NotFound desc = could not find container \"c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c\": container with ID starting with c7995c6d4f285b83e830a44f734014770abc618fa0af66940bdca6d59460700c not found: ID does not exist" Oct 02 19:46:44 crc kubenswrapper[4832]: I1002 19:46:44.030571 4832 scope.go:117] "RemoveContainer" containerID="8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5" Oct 02 19:46:44 crc kubenswrapper[4832]: E1002 19:46:44.031076 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5\": container with ID starting with 8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5 not found: ID does not exist" containerID="8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5" Oct 02 19:46:44 crc kubenswrapper[4832]: I1002 19:46:44.031124 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5"} err="failed to get container status \"8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5\": rpc error: code = NotFound desc = could not find container \"8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5\": container with ID starting with 8e209034f603dbecde71c08070b295745a4344bf1cab354b7a1bdb520c15fbe5 not found: ID does not exist" Oct 02 19:46:44 crc kubenswrapper[4832]: I1002 19:46:44.031155 4832 scope.go:117] "RemoveContainer" containerID="7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07" Oct 02 19:46:44 crc kubenswrapper[4832]: E1002 19:46:44.031612 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07\": container with ID starting with 7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07 not found: ID does not exist" containerID="7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07" Oct 02 19:46:44 crc kubenswrapper[4832]: I1002 19:46:44.031651 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07"} err="failed to get container status \"7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07\": rpc error: code = NotFound desc = could not find container \"7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07\": container with ID starting with 7093db85527bcac94c6878696c226f4b24b5758ae5f1de1608056d4023960a07 not found: ID does not exist" Oct 02 19:46:45 crc kubenswrapper[4832]: I1002 19:46:45.249414 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" path="/var/lib/kubelet/pods/b8e0d595-bd53-495c-9e1c-e5acc862e2cc/volumes" Oct 02 19:47:56 crc kubenswrapper[4832]: I1002 19:47:56.875308 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:47:56 crc kubenswrapper[4832]: I1002 19:47:56.875703 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" 
podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:48:26 crc kubenswrapper[4832]: I1002 19:48:26.875862 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:48:26 crc kubenswrapper[4832]: I1002 19:48:26.876597 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:48:56 crc kubenswrapper[4832]: I1002 19:48:56.875536 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:48:56 crc kubenswrapper[4832]: I1002 19:48:56.875955 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:48:56 crc kubenswrapper[4832]: I1002 19:48:56.876003 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 19:48:56 crc kubenswrapper[4832]: I1002 19:48:56.876893 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 19:48:56 crc kubenswrapper[4832]: I1002 19:48:56.876958 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" gracePeriod=600 Oct 02 19:48:57 crc kubenswrapper[4832]: E1002 19:48:57.014153 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:48:57 crc kubenswrapper[4832]: I1002 19:48:57.530083 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" exitCode=0 Oct 02 19:48:57 crc kubenswrapper[4832]: I1002 19:48:57.530138 4832 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4"} Oct 02 19:48:57 crc kubenswrapper[4832]: I1002 19:48:57.530178 4832 scope.go:117] "RemoveContainer" containerID="19082400a879325a6f1492d474d87668a75d8d0ea380b058a21df0e84596bcae" Oct 02 19:48:57 crc kubenswrapper[4832]: I1002 19:48:57.531727 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:48:57 crc kubenswrapper[4832]: E1002 19:48:57.532687 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:49:12 crc kubenswrapper[4832]: I1002 19:49:12.223045 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:49:12 crc kubenswrapper[4832]: E1002 19:49:12.223769 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:49:26 crc kubenswrapper[4832]: I1002 19:49:26.223547 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:49:26 crc kubenswrapper[4832]: E1002 19:49:26.224452 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.736328 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pthrp"] Oct 02 19:49:28 crc kubenswrapper[4832]: E1002 19:49:28.737938 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="extract-content" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.737974 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="extract-content" Oct 02 19:49:28 crc kubenswrapper[4832]: E1002 19:49:28.738000 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="extract-utilities" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.738007 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="extract-utilities" Oct 02 19:49:28 crc kubenswrapper[4832]: E1002 19:49:28.738043 4832 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="registry-server" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.738049 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="registry-server" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.738336 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e0d595-bd53-495c-9e1c-e5acc862e2cc" containerName="registry-server" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.740102 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.749155 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pthrp"] Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.753110 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-utilities\") pod \"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.753185 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rtz\" (UniqueName: \"kubernetes.io/projected/14048d5e-0eb6-4025-a02e-41cd3bcef786-kube-api-access-x7rtz\") pod \"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.753328 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-catalog-content\") pod \"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.855138 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-utilities\") pod \"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.855198 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rtz\" (UniqueName: \"kubernetes.io/projected/14048d5e-0eb6-4025-a02e-41cd3bcef786-kube-api-access-x7rtz\") pod \"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.855288 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-catalog-content\") pod \"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.855853 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-utilities\") pod 
\"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.855927 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-catalog-content\") pod \"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:28 crc kubenswrapper[4832]: I1002 19:49:28.970186 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rtz\" (UniqueName: \"kubernetes.io/projected/14048d5e-0eb6-4025-a02e-41cd3bcef786-kube-api-access-x7rtz\") pod \"certified-operators-pthrp\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:29 crc kubenswrapper[4832]: I1002 19:49:29.069439 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:29 crc kubenswrapper[4832]: I1002 19:49:29.740018 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pthrp"] Oct 02 19:49:29 crc kubenswrapper[4832]: I1002 19:49:29.934706 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pthrp" event={"ID":"14048d5e-0eb6-4025-a02e-41cd3bcef786","Type":"ContainerStarted","Data":"dd5f62b31c326921e54720b642980ad388904b697202f7fb335515817403a0e2"} Oct 02 19:49:30 crc kubenswrapper[4832]: I1002 19:49:30.950614 4832 generic.go:334] "Generic (PLEG): container finished" podID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerID="b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109" exitCode=0 Oct 02 19:49:30 crc kubenswrapper[4832]: I1002 19:49:30.950717 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pthrp" event={"ID":"14048d5e-0eb6-4025-a02e-41cd3bcef786","Type":"ContainerDied","Data":"b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109"} Oct 02 19:49:32 crc kubenswrapper[4832]: I1002 19:49:32.975297 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pthrp" event={"ID":"14048d5e-0eb6-4025-a02e-41cd3bcef786","Type":"ContainerStarted","Data":"e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6"} Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.111468 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xxv6q"] Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.114514 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.153692 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxv6q"] Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.258776 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-catalog-content\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.259198 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxwg7\" (UniqueName: \"kubernetes.io/projected/1338a990-9b63-4b6c-ad9c-40c95903dbde-kube-api-access-bxwg7\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.259666 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-utilities\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.361963 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-catalog-content\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.362182 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxwg7\" (UniqueName: \"kubernetes.io/projected/1338a990-9b63-4b6c-ad9c-40c95903dbde-kube-api-access-bxwg7\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.362300 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-utilities\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.362645 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-catalog-content\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.362791 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-utilities\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.389152 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bxwg7\" (UniqueName: \"kubernetes.io/projected/1338a990-9b63-4b6c-ad9c-40c95903dbde-kube-api-access-bxwg7\") pod \"redhat-marketplace-xxv6q\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") " pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.503519 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxv6q" Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.991042 4832 generic.go:334] "Generic (PLEG): container finished" podID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerID="e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6" exitCode=0 Oct 02 19:49:33 crc kubenswrapper[4832]: I1002 19:49:33.991097 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pthrp" event={"ID":"14048d5e-0eb6-4025-a02e-41cd3bcef786","Type":"ContainerDied","Data":"e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6"} Oct 02 19:49:34 crc kubenswrapper[4832]: I1002 19:49:34.019213 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxv6q"] Oct 02 19:49:35 crc kubenswrapper[4832]: I1002 19:49:35.003192 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pthrp" event={"ID":"14048d5e-0eb6-4025-a02e-41cd3bcef786","Type":"ContainerStarted","Data":"5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4"} Oct 02 19:49:35 crc kubenswrapper[4832]: I1002 19:49:35.004366 4832 generic.go:334] "Generic (PLEG): container finished" podID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerID="00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca" exitCode=0 Oct 02 19:49:35 crc kubenswrapper[4832]: I1002 19:49:35.004388 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxv6q" event={"ID":"1338a990-9b63-4b6c-ad9c-40c95903dbde","Type":"ContainerDied","Data":"00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca"} Oct 02 19:49:35 crc kubenswrapper[4832]: I1002 19:49:35.004402 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxv6q" event={"ID":"1338a990-9b63-4b6c-ad9c-40c95903dbde","Type":"ContainerStarted","Data":"7f62f864901718f690c276485758cac4455b539b4324f1bd0a485cc3888111d7"} Oct 02 19:49:35 crc kubenswrapper[4832]: I1002 19:49:35.026883 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pthrp" podStartSLOduration=3.5300716899999998 podStartE2EDuration="7.026864006s" podCreationTimestamp="2025-10-02 19:49:28 +0000 UTC" firstStartedPulling="2025-10-02 19:49:30.953754879 +0000 UTC m=+5327.923197751" lastFinishedPulling="2025-10-02 19:49:34.450547195 +0000 UTC m=+5331.419990067" observedRunningTime="2025-10-02 19:49:35.020763798 +0000 UTC m=+5331.990206670" watchObservedRunningTime="2025-10-02 19:49:35.026864006 +0000 UTC m=+5331.996306878" Oct 02 19:49:37 crc kubenswrapper[4832]: I1002 19:49:37.024819 4832 generic.go:334] "Generic (PLEG): container finished" podID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerID="125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe" exitCode=0 Oct 02 19:49:37 crc kubenswrapper[4832]: I1002 19:49:37.025305 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxv6q" 
event={"ID":"1338a990-9b63-4b6c-ad9c-40c95903dbde","Type":"ContainerDied","Data":"125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe"} Oct 02 19:49:37 crc kubenswrapper[4832]: I1002 19:49:37.223103 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:49:37 crc kubenswrapper[4832]: E1002 19:49:37.223577 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:49:38 crc kubenswrapper[4832]: I1002 19:49:38.044938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxv6q" event={"ID":"1338a990-9b63-4b6c-ad9c-40c95903dbde","Type":"ContainerStarted","Data":"6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d"} Oct 02 19:49:38 crc kubenswrapper[4832]: I1002 19:49:38.075809 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xxv6q" podStartSLOduration=2.503892644 podStartE2EDuration="5.07578961s" podCreationTimestamp="2025-10-02 19:49:33 +0000 UTC" firstStartedPulling="2025-10-02 19:49:35.006028343 +0000 UTC m=+5331.975471215" lastFinishedPulling="2025-10-02 19:49:37.577925309 +0000 UTC m=+5334.547368181" observedRunningTime="2025-10-02 19:49:38.070632711 +0000 UTC m=+5335.040075593" watchObservedRunningTime="2025-10-02 19:49:38.07578961 +0000 UTC m=+5335.045232492" Oct 02 19:49:39 crc kubenswrapper[4832]: I1002 19:49:39.070353 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:39 crc kubenswrapper[4832]: I1002 19:49:39.070632 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:39 crc kubenswrapper[4832]: I1002 19:49:39.121323 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:40 crc kubenswrapper[4832]: I1002 19:49:40.148670 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:41 crc kubenswrapper[4832]: I1002 19:49:41.304742 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pthrp"] Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.100796 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pthrp" podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerName="registry-server" containerID="cri-o://5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4" gracePeriod=2 Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.731858 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.827891 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rtz\" (UniqueName: \"kubernetes.io/projected/14048d5e-0eb6-4025-a02e-41cd3bcef786-kube-api-access-x7rtz\") pod \"14048d5e-0eb6-4025-a02e-41cd3bcef786\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.828375 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-utilities\") pod \"14048d5e-0eb6-4025-a02e-41cd3bcef786\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.828485 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-catalog-content\") pod \"14048d5e-0eb6-4025-a02e-41cd3bcef786\" (UID: \"14048d5e-0eb6-4025-a02e-41cd3bcef786\") " Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.829542 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-utilities" (OuterVolumeSpecName: "utilities") pod "14048d5e-0eb6-4025-a02e-41cd3bcef786" (UID: "14048d5e-0eb6-4025-a02e-41cd3bcef786"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.833731 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14048d5e-0eb6-4025-a02e-41cd3bcef786-kube-api-access-x7rtz" (OuterVolumeSpecName: "kube-api-access-x7rtz") pod "14048d5e-0eb6-4025-a02e-41cd3bcef786" (UID: "14048d5e-0eb6-4025-a02e-41cd3bcef786"). InnerVolumeSpecName "kube-api-access-x7rtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.870151 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14048d5e-0eb6-4025-a02e-41cd3bcef786" (UID: "14048d5e-0eb6-4025-a02e-41cd3bcef786"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.931009 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rtz\" (UniqueName: \"kubernetes.io/projected/14048d5e-0eb6-4025-a02e-41cd3bcef786-kube-api-access-x7rtz\") on node \"crc\" DevicePath \"\"" Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.931052 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:49:42 crc kubenswrapper[4832]: I1002 19:49:42.931066 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14048d5e-0eb6-4025-a02e-41cd3bcef786-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.116969 4832 generic.go:334] "Generic (PLEG): container finished" podID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerID="5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4" exitCode=0 Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.117070 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pthrp" event={"ID":"14048d5e-0eb6-4025-a02e-41cd3bcef786","Type":"ContainerDied","Data":"5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4"} Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.117113 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pthrp" event={"ID":"14048d5e-0eb6-4025-a02e-41cd3bcef786","Type":"ContainerDied","Data":"dd5f62b31c326921e54720b642980ad388904b697202f7fb335515817403a0e2"} Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.117141 4832 scope.go:117] "RemoveContainer" containerID="5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.117328 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pthrp" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.151562 4832 scope.go:117] "RemoveContainer" containerID="e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.171148 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pthrp"] Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.185956 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pthrp"] Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.186879 4832 scope.go:117] "RemoveContainer" containerID="b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.239071 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" path="/var/lib/kubelet/pods/14048d5e-0eb6-4025-a02e-41cd3bcef786/volumes" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.260184 4832 scope.go:117] "RemoveContainer" containerID="5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4" Oct 02 19:49:43 crc kubenswrapper[4832]: E1002 19:49:43.260749 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4\": container with ID starting with 5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4 not found: ID does not exist" containerID="5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.260806 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4"} err="failed to get container status \"5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4\": rpc error: code = NotFound desc = could not find container \"5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4\": container with ID starting with 5d89b16260573b429ae40a9c3f4b708097c852fb7e52e170ded58bf70ee4b9b4 not found: ID does not exist" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.260840 4832 scope.go:117] "RemoveContainer" containerID="e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6" Oct 02 19:49:43 crc kubenswrapper[4832]: E1002 19:49:43.261321 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6\": container with ID starting with e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6 not found: ID does not exist" containerID="e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.261360 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6"} err="failed to get container status \"e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6\": rpc error: code = NotFound desc = could not find container \"e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6\": container with ID starting with e2574a056185cdad6ed109c2fb0a7b396fa9082eed19812ee2790719b4561fe6 not found: ID does not exist" Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 
Oct 02 19:49:43 crc kubenswrapper[4832]: E1002 19:49:43.261811 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109\": container with ID starting with b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109 not found: ID does not exist" containerID="b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109"
Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.261861 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109"} err="failed to get container status \"b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109\": rpc error: code = NotFound desc = could not find container \"b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109\": container with ID starting with b59064ac169e3c06e2081e6489a978d9948b012d79132f09ffef06c18c2f8109 not found: ID does not exist"
Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.503978 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xxv6q"
Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.504036 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xxv6q"
Oct 02 19:49:43 crc kubenswrapper[4832]: I1002 19:49:43.562390 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xxv6q"
Oct 02 19:49:44 crc kubenswrapper[4832]: I1002 19:49:44.185256 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xxv6q"
Oct 02 19:49:45 crc kubenswrapper[4832]: I1002 19:49:45.699348 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxv6q"]
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.161197 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xxv6q" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerName="registry-server" containerID="cri-o://6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d" gracePeriod=2
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.802532 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxv6q"
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.845817 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-catalog-content\") pod \"1338a990-9b63-4b6c-ad9c-40c95903dbde\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") "
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.845950 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxwg7\" (UniqueName: \"kubernetes.io/projected/1338a990-9b63-4b6c-ad9c-40c95903dbde-kube-api-access-bxwg7\") pod \"1338a990-9b63-4b6c-ad9c-40c95903dbde\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") "
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.846066 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-utilities\") pod \"1338a990-9b63-4b6c-ad9c-40c95903dbde\" (UID: \"1338a990-9b63-4b6c-ad9c-40c95903dbde\") "
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.847845 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-utilities" (OuterVolumeSpecName: "utilities") pod "1338a990-9b63-4b6c-ad9c-40c95903dbde" (UID: "1338a990-9b63-4b6c-ad9c-40c95903dbde"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.853158 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1338a990-9b63-4b6c-ad9c-40c95903dbde-kube-api-access-bxwg7" (OuterVolumeSpecName: "kube-api-access-bxwg7") pod "1338a990-9b63-4b6c-ad9c-40c95903dbde" (UID: "1338a990-9b63-4b6c-ad9c-40c95903dbde"). InnerVolumeSpecName "kube-api-access-bxwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.870892 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1338a990-9b63-4b6c-ad9c-40c95903dbde" (UID: "1338a990-9b63-4b6c-ad9c-40c95903dbde"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.949192 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.949228 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxwg7\" (UniqueName: \"kubernetes.io/projected/1338a990-9b63-4b6c-ad9c-40c95903dbde-kube-api-access-bxwg7\") on node \"crc\" DevicePath \"\""
Oct 02 19:49:46 crc kubenswrapper[4832]: I1002 19:49:46.949240 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1338a990-9b63-4b6c-ad9c-40c95903dbde-utilities\") on node \"crc\" DevicePath \"\""
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.189216 4832 generic.go:334] "Generic (PLEG): container finished" podID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerID="6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d" exitCode=0
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.189327 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxv6q" event={"ID":"1338a990-9b63-4b6c-ad9c-40c95903dbde","Type":"ContainerDied","Data":"6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d"}
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.189375 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxv6q" event={"ID":"1338a990-9b63-4b6c-ad9c-40c95903dbde","Type":"ContainerDied","Data":"7f62f864901718f690c276485758cac4455b539b4324f1bd0a485cc3888111d7"}
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.189427 4832 scope.go:117] "RemoveContainer" containerID="6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d"
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.189813 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxv6q"
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.223376 4832 scope.go:117] "RemoveContainer" containerID="125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe"
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.263968 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxv6q"]
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.276158 4832 scope.go:117] "RemoveContainer" containerID="00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca"
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.280605 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxv6q"]
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.341298 4832 scope.go:117] "RemoveContainer" containerID="6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d"
Oct 02 19:49:47 crc kubenswrapper[4832]: E1002 19:49:47.341977 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d\": container with ID starting with 6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d not found: ID does not exist" containerID="6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d"
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.342110 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d"} err="failed to get container status \"6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d\": rpc error: code = NotFound desc = could not find container \"6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d\": container with ID starting with 6ee9d2393b5f033e6f5520455d8485f9a0b4d00e0b2d16c963c794b8c1f4db9d not found: ID does not exist"
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.342276 4832 scope.go:117] "RemoveContainer" containerID="125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe"
Oct 02 19:49:47 crc kubenswrapper[4832]: E1002 19:49:47.342732 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe\": container with ID starting with 125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe not found: ID does not exist" containerID="125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe"
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.342771 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe"} err="failed to get container status \"125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe\": rpc error: code = NotFound desc = could not find container \"125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe\": container with ID starting with 125ef6c8401d61fb50fe8c869073ff964a697b52eef5d4c841c0c8c61b02effe not found: ID does not exist"
Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.342798 4832 scope.go:117] "RemoveContainer" containerID="00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca"
Oct 02 19:49:47 crc kubenswrapper[4832]: E1002 19:49:47.343246 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca\": container with ID starting with 00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca not found: ID does not exist" containerID="00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca"
failed" err="rpc error: code = NotFound desc = could not find container \"00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca\": container with ID starting with 00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca not found: ID does not exist" containerID="00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca" Oct 02 19:49:47 crc kubenswrapper[4832]: I1002 19:49:47.343390 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca"} err="failed to get container status \"00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca\": rpc error: code = NotFound desc = could not find container \"00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca\": container with ID starting with 00f85f5b56f47c9b1324609102c7bc4e87810fc89ac5e83240a6c5fc79319aca not found: ID does not exist" Oct 02 19:49:49 crc kubenswrapper[4832]: I1002 19:49:49.250155 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" path="/var/lib/kubelet/pods/1338a990-9b63-4b6c-ad9c-40c95903dbde/volumes" Oct 02 19:49:50 crc kubenswrapper[4832]: I1002 19:49:50.224413 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:49:50 crc kubenswrapper[4832]: E1002 19:49:50.225338 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.002164 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xf2rk"] Oct 02 19:49:54 crc kubenswrapper[4832]: E1002 19:49:54.005392 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerName="extract-content" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.005495 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerName="extract-content" Oct 02 19:49:54 crc kubenswrapper[4832]: E1002 19:49:54.005609 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerName="registry-server" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.005675 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerName="registry-server" Oct 02 19:49:54 crc kubenswrapper[4832]: E1002 19:49:54.005871 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerName="extract-utilities" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.005960 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerName="extract-utilities" Oct 02 19:49:54 crc kubenswrapper[4832]: E1002 19:49:54.006039 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerName="extract-utilities" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.006114 4832 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerName="extract-utilities" Oct 02 19:49:54 crc kubenswrapper[4832]: E1002 19:49:54.006188 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerName="extract-content" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.006256 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerName="extract-content" Oct 02 19:49:54 crc kubenswrapper[4832]: E1002 19:49:54.006371 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerName="registry-server" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.006443 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerName="registry-server" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.006827 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="14048d5e-0eb6-4025-a02e-41cd3bcef786" containerName="registry-server" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.006989 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="1338a990-9b63-4b6c-ad9c-40c95903dbde" containerName="registry-server" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.009655 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.022814 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xf2rk"] Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.143863 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p564l\" (UniqueName: \"kubernetes.io/projected/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-kube-api-access-p564l\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.143941 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-catalog-content\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.143973 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-utilities\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.246803 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p564l\" (UniqueName: \"kubernetes.io/projected/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-kube-api-access-p564l\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.246886 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-catalog-content\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.246917 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-utilities\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.247390 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-catalog-content\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.247470 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-utilities\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.268730 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p564l\" (UniqueName: \"kubernetes.io/projected/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-kube-api-access-p564l\") pod \"community-operators-xf2rk\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.338734 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:49:54 crc kubenswrapper[4832]: I1002 19:49:54.904779 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xf2rk"] Oct 02 19:49:55 crc kubenswrapper[4832]: I1002 19:49:55.292742 4832 generic.go:334] "Generic (PLEG): container finished" podID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerID="48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e" exitCode=0 Oct 02 19:49:55 crc kubenswrapper[4832]: I1002 19:49:55.292809 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf2rk" event={"ID":"0eaeecd9-ce9b-428e-a04e-5dfa938d8489","Type":"ContainerDied","Data":"48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e"} Oct 02 19:49:55 crc kubenswrapper[4832]: I1002 19:49:55.293049 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf2rk" event={"ID":"0eaeecd9-ce9b-428e-a04e-5dfa938d8489","Type":"ContainerStarted","Data":"af55c7d75e67a16267bf04c860fc2b3308afabc49104eb246a72fff2d92896ba"} Oct 02 19:49:57 crc kubenswrapper[4832]: I1002 19:49:57.321882 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf2rk" event={"ID":"0eaeecd9-ce9b-428e-a04e-5dfa938d8489","Type":"ContainerStarted","Data":"3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82"} Oct 02 19:49:58 crc kubenswrapper[4832]: I1002 19:49:58.337549 4832 generic.go:334] "Generic (PLEG): container finished" podID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerID="3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82" exitCode=0 Oct 02 19:49:58 crc kubenswrapper[4832]: I1002 19:49:58.337610 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf2rk" event={"ID":"0eaeecd9-ce9b-428e-a04e-5dfa938d8489","Type":"ContainerDied","Data":"3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82"} Oct 02 19:49:59 crc kubenswrapper[4832]: I1002 19:49:59.355638 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf2rk" event={"ID":"0eaeecd9-ce9b-428e-a04e-5dfa938d8489","Type":"ContainerStarted","Data":"601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118"} Oct 02 19:49:59 crc kubenswrapper[4832]: I1002 19:49:59.384963 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xf2rk" podStartSLOduration=2.653274583 podStartE2EDuration="6.384937087s" podCreationTimestamp="2025-10-02 19:49:53 +0000 UTC" firstStartedPulling="2025-10-02 19:49:55.295101996 +0000 UTC m=+5352.264544878" lastFinishedPulling="2025-10-02 19:49:59.02676451 +0000 UTC m=+5355.996207382" observedRunningTime="2025-10-02 19:49:59.377325402 +0000 UTC m=+5356.346768284" watchObservedRunningTime="2025-10-02 19:49:59.384937087 +0000 UTC m=+5356.354379969" Oct 02 19:50:04 crc kubenswrapper[4832]: I1002 19:50:04.223410 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:50:04 crc kubenswrapper[4832]: E1002 19:50:04.225102 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:50:04 crc kubenswrapper[4832]: I1002 19:50:04.339639 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:50:04 crc kubenswrapper[4832]: I1002 19:50:04.341758 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:50:04 crc kubenswrapper[4832]: I1002 19:50:04.396727 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:50:04 crc kubenswrapper[4832]: I1002 19:50:04.463795 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:50:04 crc kubenswrapper[4832]: I1002 19:50:04.647419 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xf2rk"] Oct 02 19:50:06 crc kubenswrapper[4832]: I1002 19:50:06.444092 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xf2rk" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerName="registry-server" containerID="cri-o://601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118" gracePeriod=2 Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.015115 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.094331 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-utilities\") pod \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.094483 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-catalog-content\") pod \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.094584 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p564l\" (UniqueName: \"kubernetes.io/projected/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-kube-api-access-p564l\") pod \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\" (UID: \"0eaeecd9-ce9b-428e-a04e-5dfa938d8489\") " Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.098166 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-utilities" (OuterVolumeSpecName: "utilities") pod "0eaeecd9-ce9b-428e-a04e-5dfa938d8489" (UID: "0eaeecd9-ce9b-428e-a04e-5dfa938d8489"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.104063 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-kube-api-access-p564l" (OuterVolumeSpecName: "kube-api-access-p564l") pod "0eaeecd9-ce9b-428e-a04e-5dfa938d8489" (UID: "0eaeecd9-ce9b-428e-a04e-5dfa938d8489"). InnerVolumeSpecName "kube-api-access-p564l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.197925 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p564l\" (UniqueName: \"kubernetes.io/projected/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-kube-api-access-p564l\") on node \"crc\" DevicePath \"\"" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.197967 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.458550 4832 generic.go:334] "Generic (PLEG): container finished" podID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerID="601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118" exitCode=0 Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.458631 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf2rk" event={"ID":"0eaeecd9-ce9b-428e-a04e-5dfa938d8489","Type":"ContainerDied","Data":"601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118"} Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.458674 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xf2rk" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.458878 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xf2rk" event={"ID":"0eaeecd9-ce9b-428e-a04e-5dfa938d8489","Type":"ContainerDied","Data":"af55c7d75e67a16267bf04c860fc2b3308afabc49104eb246a72fff2d92896ba"} Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.458894 4832 scope.go:117] "RemoveContainer" containerID="601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.490587 4832 scope.go:117] "RemoveContainer" containerID="3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.527718 4832 scope.go:117] "RemoveContainer" containerID="48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.573955 4832 scope.go:117] "RemoveContainer" containerID="601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118" Oct 02 19:50:07 crc kubenswrapper[4832]: E1002 19:50:07.574465 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118\": container with ID starting with 601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118 not found: ID does not exist" containerID="601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.574517 4832 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118"} err="failed to get container status \"601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118\": rpc error: code = NotFound desc = could not find container \"601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118\": container with ID starting with 601a71ca1637a310008cf4a0679337ea83d05e8c155c83e95f9c34167c633118 not found: ID does not exist" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.574543 4832 scope.go:117] "RemoveContainer" containerID="3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82" Oct 02 19:50:07 crc kubenswrapper[4832]: E1002 19:50:07.575114 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82\": container with ID starting with 3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82 not found: ID does not exist" containerID="3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.575156 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82"} err="failed to get container status \"3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82\": rpc error: code = NotFound desc = could not find container \"3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82\": container with ID starting with 3013b8e7c7e4fdfa59491efb397fcad9413206a280dcdd2fa5c9b5bf28478a82 not found: ID does not exist" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.575192 4832 scope.go:117] "RemoveContainer" containerID="48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e" Oct 02 19:50:07 crc kubenswrapper[4832]: E1002 19:50:07.575769 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e\": container with ID starting with 48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e not found: ID does not exist" containerID="48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.575822 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e"} err="failed to get container status \"48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e\": rpc error: code = NotFound desc = could not find container \"48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e\": container with ID starting with 48e1c38563e88812e293e21a59f103e03b5ba9c0461c19b3de160bea6494b18e not found: ID does not exist" Oct 02 19:50:07 crc kubenswrapper[4832]: I1002 19:50:07.980719 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eaeecd9-ce9b-428e-a04e-5dfa938d8489" (UID: "0eaeecd9-ce9b-428e-a04e-5dfa938d8489"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:50:08 crc kubenswrapper[4832]: I1002 19:50:08.025404 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eaeecd9-ce9b-428e-a04e-5dfa938d8489-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:50:08 crc kubenswrapper[4832]: I1002 19:50:08.147708 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xf2rk"] Oct 02 19:50:08 crc kubenswrapper[4832]: I1002 19:50:08.157683 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xf2rk"] Oct 02 19:50:09 crc kubenswrapper[4832]: I1002 19:50:09.237328 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" path="/var/lib/kubelet/pods/0eaeecd9-ce9b-428e-a04e-5dfa938d8489/volumes" Oct 02 19:50:16 crc kubenswrapper[4832]: I1002 19:50:16.224086 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:50:16 crc kubenswrapper[4832]: E1002 19:50:16.225525 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:50:29 crc kubenswrapper[4832]: I1002 19:50:29.223255 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:50:29 crc kubenswrapper[4832]: E1002 19:50:29.224062 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:50:43 crc kubenswrapper[4832]: I1002 19:50:43.223404 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:50:43 crc kubenswrapper[4832]: E1002 19:50:43.224239 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:50:57 crc kubenswrapper[4832]: I1002 19:50:57.222808 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:50:57 crc kubenswrapper[4832]: E1002 19:50:57.223606 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:51:12 crc kubenswrapper[4832]: I1002 19:51:12.224561 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:51:12 crc kubenswrapper[4832]: E1002 19:51:12.225767 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:51:23 crc kubenswrapper[4832]: I1002 19:51:23.223562 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:51:23 crc kubenswrapper[4832]: E1002 19:51:23.225008 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:51:35 crc kubenswrapper[4832]: I1002 19:51:35.235046 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:51:35 crc kubenswrapper[4832]: E1002 19:51:35.235859 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:51:48 crc kubenswrapper[4832]: I1002 19:51:48.223069 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:51:48 crc kubenswrapper[4832]: E1002 19:51:48.223881 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:52:00 crc kubenswrapper[4832]: I1002 19:52:00.224773 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:52:00 crc kubenswrapper[4832]: E1002 19:52:00.226471 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:52:11 crc kubenswrapper[4832]: I1002 19:52:11.223092 4832 
scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:52:11 crc kubenswrapper[4832]: E1002 19:52:11.224470 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:52:24 crc kubenswrapper[4832]: I1002 19:52:24.222945 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:52:24 crc kubenswrapper[4832]: E1002 19:52:24.223776 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:52:37 crc kubenswrapper[4832]: I1002 19:52:37.223218 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:52:37 crc kubenswrapper[4832]: E1002 19:52:37.224147 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:52:49 crc kubenswrapper[4832]: I1002 19:52:49.229867 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:52:49 crc kubenswrapper[4832]: E1002 19:52:49.230798 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:53:01 crc kubenswrapper[4832]: I1002 19:53:01.222735 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:53:01 crc kubenswrapper[4832]: E1002 19:53:01.223581 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:53:15 crc kubenswrapper[4832]: I1002 19:53:15.232434 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:53:15 crc kubenswrapper[4832]: E1002 19:53:15.233306 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:53:29 crc kubenswrapper[4832]: I1002 19:53:29.224577 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:53:29 crc kubenswrapper[4832]: E1002 19:53:29.225414 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:53:42 crc kubenswrapper[4832]: I1002 19:53:42.223291 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:53:42 crc kubenswrapper[4832]: E1002 19:53:42.224223 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 19:53:57 crc kubenswrapper[4832]: I1002 19:53:57.223128 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:53:58 crc kubenswrapper[4832]: I1002 19:53:58.311836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"879114e00327d2a95c39bc82519c53c03988b64ac14d0a277e3ce03b5d041d68"} Oct 02 19:56:00 crc kubenswrapper[4832]: I1002 19:56:00.779820 4832 trace.go:236] Trace[584296455]: "Calculate volume metrics of storage for pod minio-dev/minio" (02-Oct-2025 19:55:59.673) (total time: 1105ms): Oct 02 19:56:00 crc kubenswrapper[4832]: Trace[584296455]: [1.105829717s] [1.105829717s] END Oct 02 19:56:26 crc kubenswrapper[4832]: I1002 19:56:26.875970 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:56:26 crc kubenswrapper[4832]: I1002 19:56:26.876604 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:56:30 crc kubenswrapper[4832]: I1002 19:56:30.145955 4832 generic.go:334] "Generic (PLEG): container finished" podID="040c96d0-9636-499a-9986-fb79a73e7b2d" 
containerID="1753d1db5ce6a3d635d6ce8c5d6f0839b1cc7b18732db5308835b482f6a812ef" exitCode=0 Oct 02 19:56:30 crc kubenswrapper[4832]: I1002 19:56:30.146014 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"040c96d0-9636-499a-9986-fb79a73e7b2d","Type":"ContainerDied","Data":"1753d1db5ce6a3d635d6ce8c5d6f0839b1cc7b18732db5308835b482f6a812ef"} Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.668736 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676050 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-workdir\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676099 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config-secret\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676133 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ssh-key\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676191 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-temporary\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676231 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676288 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ca-certs\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676320 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676339 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99f2k\" (UniqueName: \"kubernetes.io/projected/040c96d0-9636-499a-9986-fb79a73e7b2d-kube-api-access-99f2k\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.676425 4832 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-config-data\") pod \"040c96d0-9636-499a-9986-fb79a73e7b2d\" (UID: \"040c96d0-9636-499a-9986-fb79a73e7b2d\") " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.678562 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.679177 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-config-data" (OuterVolumeSpecName: "config-data") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.684552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040c96d0-9636-499a-9986-fb79a73e7b2d-kube-api-access-99f2k" (OuterVolumeSpecName: "kube-api-access-99f2k") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "kube-api-access-99f2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.686458 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.689350 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.732538 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.738511 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.747413 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.751744 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "040c96d0-9636-499a-9986-fb79a73e7b2d" (UID: "040c96d0-9636-499a-9986-fb79a73e7b2d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.778488 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.778555 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.778567 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.778577 4832 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.778586 4832 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/040c96d0-9636-499a-9986-fb79a73e7b2d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.780957 4832 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.780979 4832 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/040c96d0-9636-499a-9986-fb79a73e7b2d-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.780989 4832 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/040c96d0-9636-499a-9986-fb79a73e7b2d-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.780998 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99f2k\" (UniqueName: \"kubernetes.io/projected/040c96d0-9636-499a-9986-fb79a73e7b2d-kube-api-access-99f2k\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.809795 4832 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 02 19:56:31 crc kubenswrapper[4832]: I1002 19:56:31.883682 4832 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 02 19:56:32 crc kubenswrapper[4832]: I1002 19:56:32.169167 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"040c96d0-9636-499a-9986-fb79a73e7b2d","Type":"ContainerDied","Data":"91b6af882844a0b3f9893047e3a690125a378880fcc5e4eda614936c5b21b19d"} Oct 02 19:56:32 crc kubenswrapper[4832]: I1002 19:56:32.169215 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b6af882844a0b3f9893047e3a690125a378880fcc5e4eda614936c5b21b19d" Oct 02 19:56:32 crc kubenswrapper[4832]: I1002 19:56:32.169214 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.518374 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 02 19:56:39 crc kubenswrapper[4832]: E1002 19:56:39.519402 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerName="extract-utilities" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.519420 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerName="extract-utilities" Oct 02 19:56:39 crc kubenswrapper[4832]: E1002 19:56:39.519458 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerName="extract-content" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.519466 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerName="extract-content" Oct 02 19:56:39 crc kubenswrapper[4832]: E1002 19:56:39.519506 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040c96d0-9636-499a-9986-fb79a73e7b2d" containerName="tempest-tests-tempest-tests-runner" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.519515 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="040c96d0-9636-499a-9986-fb79a73e7b2d" containerName="tempest-tests-tempest-tests-runner" Oct 02 19:56:39 crc kubenswrapper[4832]: E1002 19:56:39.519526 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerName="registry-server" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.519533 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerName="registry-server" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.519866 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="040c96d0-9636-499a-9986-fb79a73e7b2d" containerName="tempest-tests-tempest-tests-runner" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.519893 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eaeecd9-ce9b-428e-a04e-5dfa938d8489" containerName="registry-server" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.522130 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.530429 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-2rdcd" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.533405 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.599731 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"29704afa-00e2-4921-92a4-9fe6f0d9e6e5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.600762 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvs5\" (UniqueName: \"kubernetes.io/projected/29704afa-00e2-4921-92a4-9fe6f0d9e6e5-kube-api-access-cqvs5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"29704afa-00e2-4921-92a4-9fe6f0d9e6e5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.702457 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqvs5\" (UniqueName: \"kubernetes.io/projected/29704afa-00e2-4921-92a4-9fe6f0d9e6e5-kube-api-access-cqvs5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"29704afa-00e2-4921-92a4-9fe6f0d9e6e5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.702602 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"29704afa-00e2-4921-92a4-9fe6f0d9e6e5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.705075 4832 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"29704afa-00e2-4921-92a4-9fe6f0d9e6e5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.752689 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqvs5\" (UniqueName: \"kubernetes.io/projected/29704afa-00e2-4921-92a4-9fe6f0d9e6e5-kube-api-access-cqvs5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"29704afa-00e2-4921-92a4-9fe6f0d9e6e5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:39 crc kubenswrapper[4832]: I1002 19:56:39.757236 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"29704afa-00e2-4921-92a4-9fe6f0d9e6e5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:39 crc 
kubenswrapper[4832]: I1002 19:56:39.847465 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 02 19:56:40 crc kubenswrapper[4832]: I1002 19:56:40.376747 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 02 19:56:40 crc kubenswrapper[4832]: W1002 19:56:40.380870 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29704afa_00e2_4921_92a4_9fe6f0d9e6e5.slice/crio-2bcc084230ad023f8cce2a4f14de40d81346ca0b375ff1c2485e307701f2dac3 WatchSource:0}: Error finding container 2bcc084230ad023f8cce2a4f14de40d81346ca0b375ff1c2485e307701f2dac3: Status 404 returned error can't find the container with id 2bcc084230ad023f8cce2a4f14de40d81346ca0b375ff1c2485e307701f2dac3 Oct 02 19:56:40 crc kubenswrapper[4832]: I1002 19:56:40.386131 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 19:56:41 crc kubenswrapper[4832]: I1002 19:56:41.288909 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"29704afa-00e2-4921-92a4-9fe6f0d9e6e5","Type":"ContainerStarted","Data":"2bcc084230ad023f8cce2a4f14de40d81346ca0b375ff1c2485e307701f2dac3"} Oct 02 19:56:42 crc kubenswrapper[4832]: I1002 19:56:42.304506 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"29704afa-00e2-4921-92a4-9fe6f0d9e6e5","Type":"ContainerStarted","Data":"7a6f8bd9c3813ee145a25290cf1e81ff97923acc30dbebec235eac797e753ada"} Oct 02 19:56:42 crc kubenswrapper[4832]: I1002 19:56:42.321130 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.764533985 podStartE2EDuration="3.321109082s" podCreationTimestamp="2025-10-02 19:56:39 +0000 UTC" firstStartedPulling="2025-10-02 19:56:40.385901898 +0000 UTC m=+5757.355344770" lastFinishedPulling="2025-10-02 19:56:41.942476955 +0000 UTC m=+5758.911919867" observedRunningTime="2025-10-02 19:56:42.319182553 +0000 UTC m=+5759.288625465" watchObservedRunningTime="2025-10-02 19:56:42.321109082 +0000 UTC m=+5759.290551954" Oct 02 19:56:54 crc kubenswrapper[4832]: I1002 19:56:54.001403 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-5f6f67fd59-pbxsj" podUID="cffb41da-c1fe-465d-8ddc-9df65cc50a51" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Oct 02 19:56:56 crc kubenswrapper[4832]: I1002 19:56:56.875198 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:56:56 crc kubenswrapper[4832]: I1002 19:56:56.875609 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.122046 4832 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-must-gather-6gqdd/must-gather-rdkxb"] Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.125363 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.130105 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6gqdd"/"kube-root-ca.crt" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.131155 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6gqdd"/"openshift-service-ca.crt" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.136346 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6gqdd"/"default-dockercfg-9zltt" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.145471 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6gqdd/must-gather-rdkxb"] Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.235370 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vmk\" (UniqueName: \"kubernetes.io/projected/baf4098c-0d95-47f2-83fb-1ff3a4124869-kube-api-access-v5vmk\") pod \"must-gather-rdkxb\" (UID: \"baf4098c-0d95-47f2-83fb-1ff3a4124869\") " pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.235489 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/baf4098c-0d95-47f2-83fb-1ff3a4124869-must-gather-output\") pod \"must-gather-rdkxb\" (UID: \"baf4098c-0d95-47f2-83fb-1ff3a4124869\") " pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.337425 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5vmk\" (UniqueName: \"kubernetes.io/projected/baf4098c-0d95-47f2-83fb-1ff3a4124869-kube-api-access-v5vmk\") pod \"must-gather-rdkxb\" (UID: \"baf4098c-0d95-47f2-83fb-1ff3a4124869\") " pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.337506 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/baf4098c-0d95-47f2-83fb-1ff3a4124869-must-gather-output\") pod \"must-gather-rdkxb\" (UID: \"baf4098c-0d95-47f2-83fb-1ff3a4124869\") " pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.337930 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/baf4098c-0d95-47f2-83fb-1ff3a4124869-must-gather-output\") pod \"must-gather-rdkxb\" (UID: \"baf4098c-0d95-47f2-83fb-1ff3a4124869\") " pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.370934 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5vmk\" (UniqueName: \"kubernetes.io/projected/baf4098c-0d95-47f2-83fb-1ff3a4124869-kube-api-access-v5vmk\") pod \"must-gather-rdkxb\" (UID: \"baf4098c-0d95-47f2-83fb-1ff3a4124869\") " pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.447558 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 19:57:07 crc kubenswrapper[4832]: I1002 19:57:07.974860 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6gqdd/must-gather-rdkxb"] Oct 02 19:57:08 crc kubenswrapper[4832]: I1002 19:57:08.642013 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" event={"ID":"baf4098c-0d95-47f2-83fb-1ff3a4124869","Type":"ContainerStarted","Data":"04e314891f33a9d33dd78feb1afeb3d9d5f4419bf0e7a99a19bc9f5cf257d2f0"} Oct 02 19:57:13 crc kubenswrapper[4832]: I1002 19:57:13.712184 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" event={"ID":"baf4098c-0d95-47f2-83fb-1ff3a4124869","Type":"ContainerStarted","Data":"21e557d4c16f4b101ab3384abe079f5e83c30da758a8f8a82764ce4e369b6bc8"} Oct 02 19:57:13 crc kubenswrapper[4832]: I1002 19:57:13.712778 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" event={"ID":"baf4098c-0d95-47f2-83fb-1ff3a4124869","Type":"ContainerStarted","Data":"d719bd1ab1fdc5208be35aaee504e0d07dcec6c10d28217035821fabbd8d8889"} Oct 02 19:57:13 crc kubenswrapper[4832]: I1002 19:57:13.744007 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" podStartSLOduration=2.548751873 podStartE2EDuration="6.743982481s" podCreationTimestamp="2025-10-02 19:57:07 +0000 UTC" firstStartedPulling="2025-10-02 19:57:07.984233931 +0000 UTC m=+5784.953676803" lastFinishedPulling="2025-10-02 19:57:12.179464539 +0000 UTC m=+5789.148907411" observedRunningTime="2025-10-02 19:57:13.733305042 +0000 UTC m=+5790.702747914" watchObservedRunningTime="2025-10-02 19:57:13.743982481 +0000 UTC m=+5790.713425373" Oct 02 19:57:18 crc kubenswrapper[4832]: I1002 19:57:18.756992 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-bzhtk"] Oct 02 19:57:18 crc kubenswrapper[4832]: I1002 19:57:18.761473 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 19:57:18 crc kubenswrapper[4832]: I1002 19:57:18.870913 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-host\") pod \"crc-debug-bzhtk\" (UID: \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\") " pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 19:57:18 crc kubenswrapper[4832]: I1002 19:57:18.871366 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhrxk\" (UniqueName: \"kubernetes.io/projected/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-kube-api-access-vhrxk\") pod \"crc-debug-bzhtk\" (UID: \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\") " pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 19:57:18 crc kubenswrapper[4832]: I1002 19:57:18.973934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-host\") pod \"crc-debug-bzhtk\" (UID: \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\") " pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 19:57:18 crc kubenswrapper[4832]: I1002 19:57:18.974083 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhrxk\" (UniqueName: \"kubernetes.io/projected/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-kube-api-access-vhrxk\") pod \"crc-debug-bzhtk\" (UID: \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\") " pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 19:57:18 crc kubenswrapper[4832]: I1002 19:57:18.975216 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-host\") pod \"crc-debug-bzhtk\" (UID: \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\") " pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.004879 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhrxk\" (UniqueName: \"kubernetes.io/projected/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-kube-api-access-vhrxk\") pod \"crc-debug-bzhtk\" (UID: \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\") " pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.086254 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.595344 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pqmwp"] Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.603757 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.619169 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pqmwp"] Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.695519 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnfr\" (UniqueName: \"kubernetes.io/projected/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-kube-api-access-svnfr\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.695879 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-utilities\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.695966 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-catalog-content\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.793427 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" event={"ID":"e1a9ba76-0d0b-4504-a9da-995b5df88d5c","Type":"ContainerStarted","Data":"cbe4ed314a889f2968de3b26dde7b83b456e2bf1b66462df425fce93855d2a87"} Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.801190 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svnfr\" (UniqueName: \"kubernetes.io/projected/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-kube-api-access-svnfr\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.801424 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-utilities\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.802147 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-utilities\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.802170 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-catalog-content\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.801460 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-catalog-content\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.836970 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svnfr\" (UniqueName: \"kubernetes.io/projected/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-kube-api-access-svnfr\") pod \"redhat-operators-pqmwp\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:19 crc kubenswrapper[4832]: I1002 19:57:19.949001 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:20 crc kubenswrapper[4832]: I1002 19:57:20.551368 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pqmwp"] Oct 02 19:57:20 crc kubenswrapper[4832]: I1002 19:57:20.806497 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqmwp" event={"ID":"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f","Type":"ContainerStarted","Data":"becbb4895628e0ef10127c4d12d1f26d022ca151ff4c0220c2a43f26d6432c52"} Oct 02 19:57:21 crc kubenswrapper[4832]: I1002 19:57:21.822632 4832 generic.go:334] "Generic (PLEG): container finished" podID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerID="d5ee2a15b4b050c37b2fab3b979ca8c1733d041f3aaefce541b599a8c42002e9" exitCode=0 Oct 02 19:57:21 crc kubenswrapper[4832]: I1002 19:57:21.822729 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqmwp" event={"ID":"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f","Type":"ContainerDied","Data":"d5ee2a15b4b050c37b2fab3b979ca8c1733d041f3aaefce541b599a8c42002e9"} Oct 02 19:57:23 crc kubenswrapper[4832]: I1002 19:57:23.851058 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqmwp" event={"ID":"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f","Type":"ContainerStarted","Data":"519b4f4c93b881e9ee688ce95a548cb9e561f9dc60af5c3bfd7d5e2ce60eb0a9"} Oct 02 19:57:26 crc kubenswrapper[4832]: I1002 19:57:26.875679 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:57:26 crc kubenswrapper[4832]: I1002 19:57:26.876183 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:57:26 crc kubenswrapper[4832]: I1002 19:57:26.876288 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 19:57:26 crc kubenswrapper[4832]: I1002 19:57:26.877609 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"879114e00327d2a95c39bc82519c53c03988b64ac14d0a277e3ce03b5d041d68"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Oct 02 19:57:26 crc kubenswrapper[4832]: I1002 19:57:26.877703 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://879114e00327d2a95c39bc82519c53c03988b64ac14d0a277e3ce03b5d041d68" gracePeriod=600 Oct 02 19:57:27 crc kubenswrapper[4832]: I1002 19:57:27.915058 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="879114e00327d2a95c39bc82519c53c03988b64ac14d0a277e3ce03b5d041d68" exitCode=0 Oct 02 19:57:27 crc kubenswrapper[4832]: I1002 19:57:27.915164 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"879114e00327d2a95c39bc82519c53c03988b64ac14d0a277e3ce03b5d041d68"} Oct 02 19:57:27 crc kubenswrapper[4832]: I1002 19:57:27.915431 4832 scope.go:117] "RemoveContainer" containerID="e3048b600023e6f39ee08aa404eb126f0fe2078aef04b4d5ba4623f052c232c4" Oct 02 19:57:33 crc kubenswrapper[4832]: I1002 19:57:33.010522 4832 generic.go:334] "Generic (PLEG): container finished" podID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerID="519b4f4c93b881e9ee688ce95a548cb9e561f9dc60af5c3bfd7d5e2ce60eb0a9" exitCode=0 Oct 02 19:57:33 crc kubenswrapper[4832]: I1002 19:57:33.010622 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqmwp" event={"ID":"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f","Type":"ContainerDied","Data":"519b4f4c93b881e9ee688ce95a548cb9e561f9dc60af5c3bfd7d5e2ce60eb0a9"} Oct 02 19:57:35 crc kubenswrapper[4832]: E1002 19:57:35.387298 4832 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Oct 02 19:57:35 crc kubenswrapper[4832]: E1002 19:57:35.393310 4832 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vhrxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-bzhtk_openshift-must-gather-6gqdd(e1a9ba76-0d0b-4504-a9da-995b5df88d5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 02 19:57:35 crc kubenswrapper[4832]: E1002 19:57:35.394588 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" podUID="e1a9ba76-0d0b-4504-a9da-995b5df88d5c" Oct 02 19:57:36 crc kubenswrapper[4832]: I1002 19:57:36.067841 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32"} Oct 02 19:57:36 crc kubenswrapper[4832]: E1002 19:57:36.069008 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" podUID="e1a9ba76-0d0b-4504-a9da-995b5df88d5c" Oct 02 19:57:37 crc kubenswrapper[4832]: I1002 19:57:37.079650 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqmwp" event={"ID":"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f","Type":"ContainerStarted","Data":"e61a39f9f0d8b48ef9a5e775b70b9f9ecaa2236842d92344658c0ca1c5a2e895"} Oct 02 19:57:37 crc kubenswrapper[4832]: I1002 19:57:37.107731 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pqmwp" podStartSLOduration=4.016827231 podStartE2EDuration="18.107706082s" podCreationTimestamp="2025-10-02 19:57:19 +0000 UTC" firstStartedPulling="2025-10-02 19:57:21.824428716 +0000 UTC m=+5798.793871588" lastFinishedPulling="2025-10-02 19:57:35.915307527 +0000 UTC m=+5812.884750439" observedRunningTime="2025-10-02 19:57:37.100972484 +0000 UTC m=+5814.070415356" watchObservedRunningTime="2025-10-02 19:57:37.107706082 +0000 UTC m=+5814.077148954" Oct 02 19:57:39 crc kubenswrapper[4832]: I1002 19:57:39.949541 4832 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:39 crc kubenswrapper[4832]: I1002 19:57:39.950201 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:57:40 crc kubenswrapper[4832]: I1002 19:57:40.996085 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pqmwp" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="registry-server" probeResult="failure" output=< Oct 02 19:57:40 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:57:40 crc kubenswrapper[4832]: > Oct 02 19:57:49 crc kubenswrapper[4832]: I1002 19:57:49.209564 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" event={"ID":"e1a9ba76-0d0b-4504-a9da-995b5df88d5c","Type":"ContainerStarted","Data":"418e96607960afd757cb1a3f2d08700ea79f6695fc18ff4ffd089890f31cc523"} Oct 02 19:57:51 crc kubenswrapper[4832]: I1002 19:57:51.012585 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pqmwp" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="registry-server" probeResult="failure" output=< Oct 02 19:57:51 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:57:51 crc kubenswrapper[4832]: > Oct 02 19:58:01 crc kubenswrapper[4832]: I1002 19:58:01.003981 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pqmwp" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="registry-server" probeResult="failure" output=< Oct 02 19:58:01 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:58:01 crc kubenswrapper[4832]: > Oct 02 19:58:11 crc kubenswrapper[4832]: I1002 19:58:11.012840 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pqmwp" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="registry-server" probeResult="failure" output=< Oct 02 19:58:11 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:58:11 crc kubenswrapper[4832]: > Oct 02 19:58:20 crc kubenswrapper[4832]: I1002 19:58:20.010752 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:58:20 crc kubenswrapper[4832]: I1002 19:58:20.041638 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" podStartSLOduration=32.537790909 podStartE2EDuration="1m2.041457296s" podCreationTimestamp="2025-10-02 19:57:18 +0000 UTC" firstStartedPulling="2025-10-02 19:57:19.141427827 +0000 UTC m=+5796.110870709" lastFinishedPulling="2025-10-02 19:57:48.645094224 +0000 UTC m=+5825.614537096" observedRunningTime="2025-10-02 19:57:49.235459386 +0000 UTC m=+5826.204902258" watchObservedRunningTime="2025-10-02 19:58:20.041457296 +0000 UTC m=+5857.010900168" Oct 02 19:58:20 crc kubenswrapper[4832]: I1002 19:58:20.063126 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:58:20 crc kubenswrapper[4832]: I1002 19:58:20.822418 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pqmwp"] Oct 02 19:58:21 crc kubenswrapper[4832]: I1002 19:58:21.642897 4832 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pqmwp" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="registry-server" containerID="cri-o://e61a39f9f0d8b48ef9a5e775b70b9f9ecaa2236842d92344658c0ca1c5a2e895" gracePeriod=2 Oct 02 19:58:22 crc kubenswrapper[4832]: I1002 19:58:22.656737 4832 generic.go:334] "Generic (PLEG): container finished" podID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerID="e61a39f9f0d8b48ef9a5e775b70b9f9ecaa2236842d92344658c0ca1c5a2e895" exitCode=0 Oct 02 19:58:22 crc kubenswrapper[4832]: I1002 19:58:22.656805 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqmwp" event={"ID":"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f","Type":"ContainerDied","Data":"e61a39f9f0d8b48ef9a5e775b70b9f9ecaa2236842d92344658c0ca1c5a2e895"} Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.640616 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.715408 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqmwp" event={"ID":"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f","Type":"ContainerDied","Data":"becbb4895628e0ef10127c4d12d1f26d022ca151ff4c0220c2a43f26d6432c52"} Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.715778 4832 scope.go:117] "RemoveContainer" containerID="e61a39f9f0d8b48ef9a5e775b70b9f9ecaa2236842d92344658c0ca1c5a2e895" Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.716002 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqmwp" Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.809489 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svnfr\" (UniqueName: \"kubernetes.io/projected/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-kube-api-access-svnfr\") pod \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.809573 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-utilities\") pod \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.809944 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-catalog-content\") pod \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\" (UID: \"df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f\") " Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.810092 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-utilities" (OuterVolumeSpecName: "utilities") pod "df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" (UID: "df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.810701 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.833673 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-kube-api-access-svnfr" (OuterVolumeSpecName: "kube-api-access-svnfr") pod "df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" (UID: "df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f"). InnerVolumeSpecName "kube-api-access-svnfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.914336 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svnfr\" (UniqueName: \"kubernetes.io/projected/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-kube-api-access-svnfr\") on node \"crc\" DevicePath \"\"" Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.929508 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" (UID: "df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:58:24 crc kubenswrapper[4832]: I1002 19:58:24.975157 4832 scope.go:117] "RemoveContainer" containerID="519b4f4c93b881e9ee688ce95a548cb9e561f9dc60af5c3bfd7d5e2ce60eb0a9" Oct 02 19:58:25 crc kubenswrapper[4832]: I1002 19:58:25.016947 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:58:25 crc kubenswrapper[4832]: I1002 19:58:25.031732 4832 scope.go:117] "RemoveContainer" containerID="d5ee2a15b4b050c37b2fab3b979ca8c1733d041f3aaefce541b599a8c42002e9" Oct 02 19:58:25 crc kubenswrapper[4832]: I1002 19:58:25.081642 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pqmwp"] Oct 02 19:58:25 crc kubenswrapper[4832]: I1002 19:58:25.091991 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pqmwp"] Oct 02 19:58:25 crc kubenswrapper[4832]: I1002 19:58:25.238132 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" path="/var/lib/kubelet/pods/df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f/volumes" Oct 02 19:58:58 crc kubenswrapper[4832]: I1002 19:58:58.203933 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_6bd80e3d-9654-4e34-8739-e718f4884c75/aodh-api/0.log" Oct 02 19:58:58 crc kubenswrapper[4832]: I1002 19:58:58.211246 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_6bd80e3d-9654-4e34-8739-e718f4884c75/aodh-evaluator/0.log" Oct 02 19:58:58 crc kubenswrapper[4832]: I1002 19:58:58.461974 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_6bd80e3d-9654-4e34-8739-e718f4884c75/aodh-listener/0.log" Oct 02 19:58:58 crc kubenswrapper[4832]: I1002 19:58:58.524249 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_6bd80e3d-9654-4e34-8739-e718f4884c75/aodh-notifier/0.log" Oct 02 19:58:58 crc 
kubenswrapper[4832]: I1002 19:58:58.706345 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-575ff4d8db-jrg4j_a65ae528-fb46-44a4-a3a3-543acfb646a9/barbican-api-log/0.log" Oct 02 19:58:58 crc kubenswrapper[4832]: I1002 19:58:58.714749 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-575ff4d8db-jrg4j_a65ae528-fb46-44a4-a3a3-543acfb646a9/barbican-api/0.log" Oct 02 19:58:58 crc kubenswrapper[4832]: I1002 19:58:58.933927 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-855fbd5c98-k2t4b_32db7ef2-6bb9-4834-9c9d-3bb13309b0e9/barbican-keystone-listener/0.log" Oct 02 19:58:59 crc kubenswrapper[4832]: I1002 19:58:59.082134 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-855fbd5c98-k2t4b_32db7ef2-6bb9-4834-9c9d-3bb13309b0e9/barbican-keystone-listener-log/0.log" Oct 02 19:58:59 crc kubenswrapper[4832]: I1002 19:58:59.179719 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd9c9bb87-rf75p_f944fb96-3cf4-42b3-b5b8-3da8dc107d7c/barbican-worker/0.log" Oct 02 19:58:59 crc kubenswrapper[4832]: I1002 19:58:59.310349 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd9c9bb87-rf75p_f944fb96-3cf4-42b3-b5b8-3da8dc107d7c/barbican-worker-log/0.log" Oct 02 19:58:59 crc kubenswrapper[4832]: I1002 19:58:59.420282 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd_92baed54-227c-474f-ad5c-b8c14493d2d5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:58:59 crc kubenswrapper[4832]: I1002 19:58:59.652037 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8a0ac381-9d1a-4068-b5bb-350b3979485e/ceilometer-notification-agent/0.log" Oct 02 19:58:59 crc kubenswrapper[4832]: I1002 19:58:59.675479 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8a0ac381-9d1a-4068-b5bb-350b3979485e/ceilometer-central-agent/0.log" Oct 02 19:58:59 crc kubenswrapper[4832]: I1002 19:58:59.883242 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8a0ac381-9d1a-4068-b5bb-350b3979485e/sg-core/0.log" Oct 02 19:58:59 crc kubenswrapper[4832]: I1002 19:58:59.921329 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8a0ac381-9d1a-4068-b5bb-350b3979485e/proxy-httpd/0.log" Oct 02 19:59:00 crc kubenswrapper[4832]: I1002 19:59:00.144784 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_528718cd-4242-48d1-be69-6637022d4c84/cinder-api-log/0.log" Oct 02 19:59:00 crc kubenswrapper[4832]: I1002 19:59:00.203506 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_528718cd-4242-48d1-be69-6637022d4c84/cinder-api/0.log" Oct 02 19:59:00 crc kubenswrapper[4832]: I1002 19:59:00.593489 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c154f010-097e-4cd5-8833-798bce95b715/cinder-scheduler/0.log" Oct 02 19:59:00 crc kubenswrapper[4832]: I1002 19:59:00.702381 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c154f010-097e-4cd5-8833-798bce95b715/probe/0.log" Oct 02 19:59:00 crc kubenswrapper[4832]: I1002 19:59:00.878409 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-px9xz_5610cb4e-4f23-4a76-b59c-5e3db6b532ff/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:00 crc kubenswrapper[4832]: I1002 19:59:00.959302 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fznkg_691f5920-3afd-4cf0-8ccb-61d2bbff10c2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:01 crc kubenswrapper[4832]: I1002 19:59:01.221710 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xtftr_8b8c6e59-47c8-4051-a398-3f3d6739d15d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:01 crc kubenswrapper[4832]: I1002 19:59:01.281055 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-qvcnk_2e85de4e-7cb3-48e0-86f7-3faaf7e067d1/init/0.log" Oct 02 19:59:01 crc kubenswrapper[4832]: I1002 19:59:01.468502 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-qvcnk_2e85de4e-7cb3-48e0-86f7-3faaf7e067d1/init/0.log" Oct 02 19:59:01 crc kubenswrapper[4832]: I1002 19:59:01.520897 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-qvcnk_2e85de4e-7cb3-48e0-86f7-3faaf7e067d1/dnsmasq-dns/0.log" Oct 02 19:59:01 crc kubenswrapper[4832]: I1002 19:59:01.544644 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm_6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:02 crc kubenswrapper[4832]: I1002 19:59:02.519136 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b7d0ad2c-59e0-4aee-930a-560d811c393c/glance-log/0.log" Oct 02 19:59:02 crc kubenswrapper[4832]: I1002 19:59:02.526081 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b7d0ad2c-59e0-4aee-930a-560d811c393c/glance-httpd/0.log" Oct 02 19:59:02 crc kubenswrapper[4832]: I1002 19:59:02.842420 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b3190ea6-2c6f-4fb9-a33a-768462224416/glance-log/0.log" Oct 02 19:59:02 crc kubenswrapper[4832]: I1002 19:59:02.881724 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b3190ea6-2c6f-4fb9-a33a-768462224416/glance-httpd/0.log" Oct 02 19:59:03 crc kubenswrapper[4832]: I1002 19:59:03.473457 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5779d8467c-rr8wn_fb6c24b8-fca2-49c2-8f1c-a41614962b83/heat-engine/0.log" Oct 02 19:59:03 crc kubenswrapper[4832]: I1002 19:59:03.728869 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5b65db8df4-nckpl_be959889-fe35-4de3-b7b2-82df67812b7d/heat-api/0.log" Oct 02 19:59:03 crc kubenswrapper[4832]: I1002 19:59:03.948630 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-698cc5cc6c-gmw7p_138ff508-ca7b-4291-8f0d-90ddc11770fb/heat-cfnapi/0.log" Oct 02 19:59:04 crc kubenswrapper[4832]: I1002 19:59:04.469484 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t_61f0ae54-7250-4cc8-9b15-10d1be6c5d31/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:04 
crc kubenswrapper[4832]: I1002 19:59:04.593032 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-47m5t_a8da994a-7b15-400a-8316-27a8c28cafe1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:04 crc kubenswrapper[4832]: I1002 19:59:04.870352 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29323861-lrrgj_d89bc766-c21f-4c7e-a092-3e1db2ed4c9d/keystone-cron/0.log" Oct 02 19:59:05 crc kubenswrapper[4832]: I1002 19:59:05.117569 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ecd10228-8f4c-46ea-946d-838bc37b46cc/kube-state-metrics/0.log" Oct 02 19:59:05 crc kubenswrapper[4832]: I1002 19:59:05.168780 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54ddcb9945-p7pkt_e632994f-7397-4c6f-950a-bcdff946d4e2/keystone-api/0.log" Oct 02 19:59:05 crc kubenswrapper[4832]: I1002 19:59:05.266624 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8_087d2e23-e74a-45de-baf2-2ed44a358880/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:05 crc kubenswrapper[4832]: I1002 19:59:05.371446 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-r9c88_9e0f6923-879e-41f9-9c8b-f0cfede7221f/logging-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:05 crc kubenswrapper[4832]: I1002 19:59:05.632335 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_6888060d-2a19-41ee-ac4d-06a28c11a0f6/mysqld-exporter/0.log" Oct 02 19:59:06 crc kubenswrapper[4832]: I1002 19:59:06.079925 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68769b5c9-9g8wt_eba53986-08b2-4e79-b3d9-85367ff7d816/neutron-httpd/0.log" Oct 02 19:59:06 crc kubenswrapper[4832]: I1002 19:59:06.156767 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68769b5c9-9g8wt_eba53986-08b2-4e79-b3d9-85367ff7d816/neutron-api/0.log" Oct 02 19:59:06 crc kubenswrapper[4832]: I1002 19:59:06.373200 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw_09555253-1acb-4af2-a44c-a2a5612465ff/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:07 crc kubenswrapper[4832]: I1002 19:59:07.108395 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_46d668ae-13cf-4e3f-a2c4-8b862cdeafcb/nova-cell0-conductor-conductor/0.log" Oct 02 19:59:07 crc kubenswrapper[4832]: I1002 19:59:07.241907 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_02307835-a3c7-4dc6-add1-8c9a6daab69d/nova-api-log/0.log" Oct 02 19:59:07 crc kubenswrapper[4832]: I1002 19:59:07.593025 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_02307835-a3c7-4dc6-add1-8c9a6daab69d/nova-api-api/0.log" Oct 02 19:59:07 crc kubenswrapper[4832]: I1002 19:59:07.619070 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8657ca8f-f47b-476a-96f0-b5f5c313cb61/nova-cell1-conductor-conductor/0.log" Oct 02 19:59:07 crc kubenswrapper[4832]: I1002 19:59:07.910616 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ce0ea362-776c-4b12-b3b6-9f684521d40f/nova-cell1-novncproxy-novncproxy/0.log" Oct 02 
19:59:07 crc kubenswrapper[4832]: I1002 19:59:07.985476 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-l5mp2_26e8352e-0e5b-4ee9-83f5-aa3323948a6d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:08 crc kubenswrapper[4832]: I1002 19:59:08.576544 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ca63490c-e0ae-4fc3-89cc-f20f8810c98c/nova-metadata-log/0.log" Oct 02 19:59:09 crc kubenswrapper[4832]: I1002 19:59:09.043112 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8f93334a-ea76-42f6-9f67-0788fac06f14/nova-scheduler-scheduler/0.log" Oct 02 19:59:09 crc kubenswrapper[4832]: I1002 19:59:09.265729 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3e9a3d78-f055-43d2-9d21-579d4a611d49/mysql-bootstrap/0.log" Oct 02 19:59:09 crc kubenswrapper[4832]: I1002 19:59:09.509796 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3e9a3d78-f055-43d2-9d21-579d4a611d49/mysql-bootstrap/0.log" Oct 02 19:59:09 crc kubenswrapper[4832]: I1002 19:59:09.572123 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3e9a3d78-f055-43d2-9d21-579d4a611d49/galera/0.log" Oct 02 19:59:09 crc kubenswrapper[4832]: I1002 19:59:09.791651 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6c6d1dc-36df-4b33-8d10-dde52bd65630/mysql-bootstrap/0.log" Oct 02 19:59:10 crc kubenswrapper[4832]: I1002 19:59:10.200644 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6c6d1dc-36df-4b33-8d10-dde52bd65630/mysql-bootstrap/0.log" Oct 02 19:59:10 crc kubenswrapper[4832]: I1002 19:59:10.311003 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6c6d1dc-36df-4b33-8d10-dde52bd65630/galera/0.log" Oct 02 19:59:10 crc kubenswrapper[4832]: I1002 19:59:10.605978 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ec4cba1f-e0b4-4901-add4-513dc675408e/openstackclient/0.log" Oct 02 19:59:10 crc kubenswrapper[4832]: I1002 19:59:10.865634 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6trqf_3533b085-2264-41c9-8feb-d8c6f40fa6c1/ovn-controller/0.log" Oct 02 19:59:11 crc kubenswrapper[4832]: I1002 19:59:11.098162 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-d5thb_cdf2a425-f35e-436a-ad17-c85f29e03490/openstack-network-exporter/0.log" Oct 02 19:59:11 crc kubenswrapper[4832]: I1002 19:59:11.247197 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ca63490c-e0ae-4fc3-89cc-f20f8810c98c/nova-metadata-metadata/0.log" Oct 02 19:59:12 crc kubenswrapper[4832]: I1002 19:59:12.126143 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g6w9z_37ac149f-65bb-4e89-911e-52f0c2434aad/ovsdb-server-init/0.log" Oct 02 19:59:12 crc kubenswrapper[4832]: I1002 19:59:12.306888 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g6w9z_37ac149f-65bb-4e89-911e-52f0c2434aad/ovsdb-server-init/0.log" Oct 02 19:59:12 crc kubenswrapper[4832]: I1002 19:59:12.324033 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g6w9z_37ac149f-65bb-4e89-911e-52f0c2434aad/ovs-vswitchd/0.log" Oct 
02 19:59:12 crc kubenswrapper[4832]: I1002 19:59:12.389731 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g6w9z_37ac149f-65bb-4e89-911e-52f0c2434aad/ovsdb-server/0.log" Oct 02 19:59:12 crc kubenswrapper[4832]: I1002 19:59:12.578370 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jwffh_6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:13 crc kubenswrapper[4832]: I1002 19:59:13.079773 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_85cf9359-d7f1-4634-9421-0dffdfb488e0/openstack-network-exporter/0.log" Oct 02 19:59:13 crc kubenswrapper[4832]: I1002 19:59:13.080577 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_85cf9359-d7f1-4634-9421-0dffdfb488e0/ovn-northd/0.log" Oct 02 19:59:13 crc kubenswrapper[4832]: I1002 19:59:13.307199 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ccf82d19-ed89-43fc-b2e0-5b8d871db17a/ovsdbserver-nb/0.log" Oct 02 19:59:13 crc kubenswrapper[4832]: I1002 19:59:13.325069 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ccf82d19-ed89-43fc-b2e0-5b8d871db17a/openstack-network-exporter/0.log" Oct 02 19:59:14 crc kubenswrapper[4832]: I1002 19:59:14.022361 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_04d55a7f-36c2-4f79-9541-3e0bf14963ca/ovsdbserver-sb/0.log" Oct 02 19:59:14 crc kubenswrapper[4832]: I1002 19:59:14.043590 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_04d55a7f-36c2-4f79-9541-3e0bf14963ca/openstack-network-exporter/0.log" Oct 02 19:59:14 crc kubenswrapper[4832]: I1002 19:59:14.420000 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7975695b86-g5x7n_9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f/placement-api/0.log" Oct 02 19:59:14 crc kubenswrapper[4832]: I1002 19:59:14.461994 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7975695b86-g5x7n_9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f/placement-log/0.log" Oct 02 19:59:14 crc kubenswrapper[4832]: I1002 19:59:14.641494 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/init-config-reloader/0.log" Oct 02 19:59:14 crc kubenswrapper[4832]: I1002 19:59:14.852399 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/init-config-reloader/0.log" Oct 02 19:59:14 crc kubenswrapper[4832]: I1002 19:59:14.855242 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/config-reloader/0.log" Oct 02 19:59:14 crc kubenswrapper[4832]: I1002 19:59:14.856625 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/prometheus/0.log" Oct 02 19:59:15 crc kubenswrapper[4832]: I1002 19:59:15.064653 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c87efd10-3959-4dfa-ab6a-88810fe9a0fa/setup-container/0.log" Oct 02 19:59:15 crc kubenswrapper[4832]: I1002 19:59:15.081075 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/thanos-sidecar/0.log" Oct 02 19:59:15 crc kubenswrapper[4832]: I1002 19:59:15.408079 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c87efd10-3959-4dfa-ab6a-88810fe9a0fa/setup-container/0.log" Oct 02 19:59:15 crc kubenswrapper[4832]: I1002 19:59:15.519120 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c87efd10-3959-4dfa-ab6a-88810fe9a0fa/rabbitmq/0.log" Oct 02 19:59:15 crc kubenswrapper[4832]: I1002 19:59:15.694720 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9ab42783-2e22-4b2f-9fab-be96ba65e345/setup-container/0.log" Oct 02 19:59:15 crc kubenswrapper[4832]: I1002 19:59:15.837169 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9ab42783-2e22-4b2f-9fab-be96ba65e345/setup-container/0.log" Oct 02 19:59:15 crc kubenswrapper[4832]: I1002 19:59:15.943088 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9ab42783-2e22-4b2f-9fab-be96ba65e345/rabbitmq/0.log" Oct 02 19:59:16 crc kubenswrapper[4832]: I1002 19:59:16.069369 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh_c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:16 crc kubenswrapper[4832]: I1002 19:59:16.182656 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xzpxr_5cf50fba-3a89-451f-adfe-f64eb401d544/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:16 crc kubenswrapper[4832]: I1002 19:59:16.478488 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch_4f9739db-9008-4848-bbc0-ddaa4da9c9b8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:16 crc kubenswrapper[4832]: I1002 19:59:16.577339 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6jxlc_d5518272-a1ba-495e-8634-43ce4c08d705/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:16 crc kubenswrapper[4832]: I1002 19:59:16.734143 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-d4vj4_03a71b8f-0cad-40ab-8092-51c6e380b13d/ssh-known-hosts-edpm-deployment/0.log" Oct 02 19:59:16 crc kubenswrapper[4832]: I1002 19:59:16.966324 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_84630b52-3d82-4ca3-aa26-0bf1b7ead64d/memcached/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.353837 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5f6f67fd59-pbxsj_cffb41da-c1fe-465d-8ddc-9df65cc50a51/proxy-server/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.358051 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5f6f67fd59-pbxsj_cffb41da-c1fe-465d-8ddc-9df65cc50a51/proxy-httpd/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.392888 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7zhzt_3f58f07d-fb3b-4be8-a9b0-221aa5c01316/swift-ring-rebalance/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.534942 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/account-auditor/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.634110 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/account-reaper/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.755702 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/account-replicator/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.854979 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/account-server/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.889447 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/container-auditor/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.912673 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/container-replicator/0.log" Oct 02 19:59:17 crc kubenswrapper[4832]: I1002 19:59:17.944361 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/container-server/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.086237 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/container-updater/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.096441 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-auditor/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.125229 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-expirer/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.178441 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-replicator/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.341246 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-updater/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.345711 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-server/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.348693 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/rsync/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.395090 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/swift-recon-cron/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.571600 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4_29281442-d5d6-4c9c-b24d-82c29d04990e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.659720 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg_0afc2c94-7e28-4344-b4be-807607a5c0e4/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:18 crc kubenswrapper[4832]: I1002 19:59:18.928626 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_29704afa-00e2-4921-92a4-9fe6f0d9e6e5/test-operator-logs-container/0.log" Oct 02 19:59:19 crc kubenswrapper[4832]: I1002 19:59:19.109715 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf_06a947a2-8fbe-4cf3-84d5-cf24e83a6e30/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 19:59:19 crc kubenswrapper[4832]: I1002 19:59:19.336882 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_040c96d0-9636-499a-9986-fb79a73e7b2d/tempest-tests-tempest-tests-runner/0.log" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.184569 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6hpdq"] Oct 02 19:59:34 crc kubenswrapper[4832]: E1002 19:59:34.186859 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="extract-content" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.187953 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="extract-content" Oct 02 19:59:34 crc kubenswrapper[4832]: E1002 19:59:34.188052 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="registry-server" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.188146 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="registry-server" Oct 02 19:59:34 crc kubenswrapper[4832]: E1002 19:59:34.188251 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="extract-utilities" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.188350 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="extract-utilities" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.188750 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7abc46-2afb-4c91-bcbd-eb7c0ee29a4f" containerName="registry-server" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.203489 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.228528 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hpdq"] Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.271205 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrpf\" (UniqueName: \"kubernetes.io/projected/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-kube-api-access-wgrpf\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.271417 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-utilities\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.271452 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-catalog-content\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.373319 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgrpf\" (UniqueName: \"kubernetes.io/projected/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-kube-api-access-wgrpf\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.373582 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-utilities\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.373627 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-catalog-content\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.377488 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-catalog-content\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.397735 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-utilities\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.410352 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wgrpf\" (UniqueName: \"kubernetes.io/projected/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-kube-api-access-wgrpf\") pod \"certified-operators-6hpdq\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:34 crc kubenswrapper[4832]: I1002 19:59:34.532128 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:35 crc kubenswrapper[4832]: I1002 19:59:35.288303 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6hpdq"] Oct 02 19:59:35 crc kubenswrapper[4832]: I1002 19:59:35.538852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hpdq" event={"ID":"dec187fb-e7cc-4d9b-97ec-0e9c3d841654","Type":"ContainerStarted","Data":"e39567fb42496da65496dff350dc0f153aabe83ebe10a60146fa1796a61f6794"} Oct 02 19:59:36 crc kubenswrapper[4832]: I1002 19:59:36.552122 4832 generic.go:334] "Generic (PLEG): container finished" podID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerID="86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1" exitCode=0 Oct 02 19:59:36 crc kubenswrapper[4832]: I1002 19:59:36.552221 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hpdq" event={"ID":"dec187fb-e7cc-4d9b-97ec-0e9c3d841654","Type":"ContainerDied","Data":"86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1"} Oct 02 19:59:38 crc kubenswrapper[4832]: I1002 19:59:38.591984 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hpdq" event={"ID":"dec187fb-e7cc-4d9b-97ec-0e9c3d841654","Type":"ContainerStarted","Data":"dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e"} Oct 02 19:59:41 crc kubenswrapper[4832]: I1002 19:59:41.637308 4832 generic.go:334] "Generic (PLEG): container finished" podID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerID="dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e" exitCode=0 Oct 02 19:59:41 crc kubenswrapper[4832]: I1002 19:59:41.637389 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hpdq" event={"ID":"dec187fb-e7cc-4d9b-97ec-0e9c3d841654","Type":"ContainerDied","Data":"dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e"} Oct 02 19:59:42 crc kubenswrapper[4832]: I1002 19:59:42.656219 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hpdq" event={"ID":"dec187fb-e7cc-4d9b-97ec-0e9c3d841654","Type":"ContainerStarted","Data":"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc"} Oct 02 19:59:42 crc kubenswrapper[4832]: I1002 19:59:42.695823 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6hpdq" podStartSLOduration=3.164592171 podStartE2EDuration="8.695793409s" podCreationTimestamp="2025-10-02 19:59:34 +0000 UTC" firstStartedPulling="2025-10-02 19:59:36.556642477 +0000 UTC m=+5933.526085349" lastFinishedPulling="2025-10-02 19:59:42.087843715 +0000 UTC m=+5939.057286587" observedRunningTime="2025-10-02 19:59:42.67990574 +0000 UTC m=+5939.649348612" watchObservedRunningTime="2025-10-02 19:59:42.695793409 +0000 UTC m=+5939.665236291" Oct 02 19:59:44 crc kubenswrapper[4832]: I1002 19:59:44.532520 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:44 crc kubenswrapper[4832]: I1002 19:59:44.532911 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:45 crc kubenswrapper[4832]: I1002 19:59:45.606293 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6hpdq" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="registry-server" probeResult="failure" output=< Oct 02 19:59:45 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 19:59:45 crc kubenswrapper[4832]: > Oct 02 19:59:54 crc kubenswrapper[4832]: I1002 19:59:54.643387 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:54 crc kubenswrapper[4832]: I1002 19:59:54.711993 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:54 crc kubenswrapper[4832]: I1002 19:59:54.894783 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hpdq"] Oct 02 19:59:55 crc kubenswrapper[4832]: I1002 19:59:55.827581 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6hpdq" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="registry-server" containerID="cri-o://2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc" gracePeriod=2 Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.358377 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.547913 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-catalog-content\") pod \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.548384 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-utilities\") pod \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.548575 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgrpf\" (UniqueName: \"kubernetes.io/projected/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-kube-api-access-wgrpf\") pod \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\" (UID: \"dec187fb-e7cc-4d9b-97ec-0e9c3d841654\") " Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.550032 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-utilities" (OuterVolumeSpecName: "utilities") pod "dec187fb-e7cc-4d9b-97ec-0e9c3d841654" (UID: "dec187fb-e7cc-4d9b-97ec-0e9c3d841654"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.556638 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-kube-api-access-wgrpf" (OuterVolumeSpecName: "kube-api-access-wgrpf") pod "dec187fb-e7cc-4d9b-97ec-0e9c3d841654" (UID: "dec187fb-e7cc-4d9b-97ec-0e9c3d841654"). InnerVolumeSpecName "kube-api-access-wgrpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.604529 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dec187fb-e7cc-4d9b-97ec-0e9c3d841654" (UID: "dec187fb-e7cc-4d9b-97ec-0e9c3d841654"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.651391 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgrpf\" (UniqueName: \"kubernetes.io/projected/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-kube-api-access-wgrpf\") on node \"crc\" DevicePath \"\"" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.651420 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.651444 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dec187fb-e7cc-4d9b-97ec-0e9c3d841654-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.843760 4832 generic.go:334] "Generic (PLEG): container finished" podID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerID="2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc" exitCode=0 Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.843881 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6hpdq" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.843876 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hpdq" event={"ID":"dec187fb-e7cc-4d9b-97ec-0e9c3d841654","Type":"ContainerDied","Data":"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc"} Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.844080 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6hpdq" event={"ID":"dec187fb-e7cc-4d9b-97ec-0e9c3d841654","Type":"ContainerDied","Data":"e39567fb42496da65496dff350dc0f153aabe83ebe10a60146fa1796a61f6794"} Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.844112 4832 scope.go:117] "RemoveContainer" containerID="2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.875409 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.875487 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.880453 4832 scope.go:117] "RemoveContainer" containerID="dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e" Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.911640 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6hpdq"] Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.927980 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6hpdq"] Oct 02 19:59:56 crc kubenswrapper[4832]: I1002 19:59:56.938147 4832 scope.go:117] "RemoveContainer" containerID="86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1" Oct 02 19:59:57 crc kubenswrapper[4832]: I1002 19:59:57.003317 4832 scope.go:117] "RemoveContainer" containerID="2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc" Oct 02 19:59:57 crc kubenswrapper[4832]: E1002 19:59:57.005663 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc\": container with ID starting with 2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc not found: ID does not exist" containerID="2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc" Oct 02 19:59:57 crc kubenswrapper[4832]: I1002 19:59:57.005750 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc"} err="failed to get container status \"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc\": rpc error: code = NotFound desc = could not find container \"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc\": container with ID starting with 
Oct 02 19:59:57 crc kubenswrapper[4832]: I1002 19:59:57.005750 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc"} err="failed to get container status \"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc\": rpc error: code = NotFound desc = could not find container \"2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc\": container with ID starting with 2c1143d9b70c065387ae7e02f8e7870725e584030f61cb1bf4bb9a3cfa21ebbc not found: ID does not exist"
Oct 02 19:59:57 crc kubenswrapper[4832]: I1002 19:59:57.005780 4832 scope.go:117] "RemoveContainer" containerID="dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e"
Oct 02 19:59:57 crc kubenswrapper[4832]: E1002 19:59:57.006400 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e\": container with ID starting with dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e not found: ID does not exist" containerID="dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e"
Oct 02 19:59:57 crc kubenswrapper[4832]: I1002 19:59:57.006466 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e"} err="failed to get container status \"dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e\": rpc error: code = NotFound desc = could not find container \"dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e\": container with ID starting with dcbf96281ffedd59ddd065c7b263721bb0586a31a68b09228da47adb09494e4e not found: ID does not exist"
Oct 02 19:59:57 crc kubenswrapper[4832]: I1002 19:59:57.006501 4832 scope.go:117] "RemoveContainer" containerID="86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1"
Oct 02 19:59:57 crc kubenswrapper[4832]: E1002 19:59:57.007094 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1\": container with ID starting with 86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1 not found: ID does not exist" containerID="86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1"
Oct 02 19:59:57 crc kubenswrapper[4832]: I1002 19:59:57.007143 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1"} err="failed to get container status \"86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1\": rpc error: code = NotFound desc = could not find container \"86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1\": container with ID starting with 86bda63cdfef4daf6b899d758bccbbe30fb3860c19b67d7737c7ae840a85ced1 not found: ID does not exist"
Oct 02 19:59:57 crc kubenswrapper[4832]: I1002 19:59:57.255386 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" path="/var/lib/kubelet/pods/dec187fb-e7cc-4d9b-97ec-0e9c3d841654/volumes"
Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.230696 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj"]
Oct 02 20:00:00 crc kubenswrapper[4832]: E1002 20:00:00.231965 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="extract-content"
Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.231983 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="extract-content"
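[Editor's note] The E-level NotFound errors above are a benign race: PLEG cleanup has already removed the containers from CRI-O by the time the kubelet's garbage-collection path asks for their status, and container deletion is idempotent, so the "DeleteContainer returned error" lines require no action. A small hypothetical helper (not kubelet code) to count these benign pairs in a journal dump fed on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Count "could not find container" NotFound errors per container-ID
	// prefix in `journalctl -u kubelet` output piped to stdin.
	re := regexp.MustCompile(`could not find container \\?"([0-9a-f]{12})`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for id, n := range counts {
		fmt.Printf("%s... NotFound x%d\n", id, n)
	}
}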
podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="registry-server" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.232011 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="registry-server" Oct 02 20:00:00 crc kubenswrapper[4832]: E1002 20:00:00.232027 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="extract-utilities" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.232034 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="extract-utilities" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.232307 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec187fb-e7cc-4d9b-97ec-0e9c3d841654" containerName="registry-server" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.233441 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.239803 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.241946 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.275591 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj"] Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.343483 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfch\" (UniqueName: \"kubernetes.io/projected/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-kube-api-access-pzfch\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.344390 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-config-volume\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.344577 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-secret-volume\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.446446 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzfch\" (UniqueName: \"kubernetes.io/projected/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-kube-api-access-pzfch\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.446718 4832 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-config-volume\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.446843 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-secret-volume\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.447854 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-config-volume\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.456538 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-secret-volume\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.476248 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzfch\" (UniqueName: \"kubernetes.io/projected/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-kube-api-access-pzfch\") pod \"collect-profiles-29323920-2cxdj\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:00 crc kubenswrapper[4832]: I1002 20:00:00.584668 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:01 crc kubenswrapper[4832]: I1002 20:00:01.124064 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj"] Oct 02 20:00:01 crc kubenswrapper[4832]: I1002 20:00:01.904454 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" event={"ID":"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4","Type":"ContainerStarted","Data":"f5c808c4f9ea8e1c0dcde8b2b1e2fc17ba1178d448a682c2f5900b7a5cc7824e"} Oct 02 20:00:01 crc kubenswrapper[4832]: I1002 20:00:01.904784 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" event={"ID":"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4","Type":"ContainerStarted","Data":"4b314853511f07d12d32b528083bd88b16316ce1c1b6af348da89512b06bf054"} Oct 02 20:00:01 crc kubenswrapper[4832]: I1002 20:00:01.958293 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" podStartSLOduration=1.958253198 podStartE2EDuration="1.958253198s" podCreationTimestamp="2025-10-02 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:00:01.922316653 +0000 UTC m=+5958.891759565" watchObservedRunningTime="2025-10-02 20:00:01.958253198 +0000 UTC m=+5958.927696070" Oct 02 20:00:02 crc kubenswrapper[4832]: I1002 20:00:02.917493 4832 generic.go:334] "Generic (PLEG): container finished" podID="2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4" containerID="f5c808c4f9ea8e1c0dcde8b2b1e2fc17ba1178d448a682c2f5900b7a5cc7824e" exitCode=0 Oct 02 20:00:02 crc kubenswrapper[4832]: I1002 20:00:02.917585 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" event={"ID":"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4","Type":"ContainerDied","Data":"f5c808c4f9ea8e1c0dcde8b2b1e2fc17ba1178d448a682c2f5900b7a5cc7824e"} Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.553580 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.661192 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzfch\" (UniqueName: \"kubernetes.io/projected/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-kube-api-access-pzfch\") pod \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.661497 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-secret-volume\") pod \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.663217 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-config-volume\") pod \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\" (UID: \"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4\") " Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.664910 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-config-volume" (OuterVolumeSpecName: "config-volume") pod "2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4" (UID: "2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.668393 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4" (UID: "2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.668506 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-kube-api-access-pzfch" (OuterVolumeSpecName: "kube-api-access-pzfch") pod "2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4" (UID: "2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4"). InnerVolumeSpecName "kube-api-access-pzfch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.766421 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.766461 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-config-volume\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.766475 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzfch\" (UniqueName: \"kubernetes.io/projected/2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4-kube-api-access-pzfch\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.949286 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" event={"ID":"2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4","Type":"ContainerDied","Data":"4b314853511f07d12d32b528083bd88b16316ce1c1b6af348da89512b06bf054"} Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.949629 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b314853511f07d12d32b528083bd88b16316ce1c1b6af348da89512b06bf054" Oct 02 20:00:04 crc kubenswrapper[4832]: I1002 20:00:04.949349 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323920-2cxdj" Oct 02 20:00:05 crc kubenswrapper[4832]: I1002 20:00:05.044217 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6"] Oct 02 20:00:05 crc kubenswrapper[4832]: I1002 20:00:05.053912 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323875-8mtr6"] Oct 02 20:00:05 crc kubenswrapper[4832]: I1002 20:00:05.239417 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f707fb-5732-4d23-be51-ba82116f8e1e" path="/var/lib/kubelet/pods/80f707fb-5732-4d23-be51-ba82116f8e1e/volumes" Oct 02 20:00:24 crc kubenswrapper[4832]: I1002 20:00:24.219285 4832 scope.go:117] "RemoveContainer" containerID="1ea336185c8768f4d6dc1a5a9b3e9314f6edc58a74c2936ad45f7072aea93062" Oct 02 20:00:26 crc kubenswrapper[4832]: I1002 20:00:26.875840 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 20:00:26 crc kubenswrapper[4832]: I1002 20:00:26.876472 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 20:00:34 crc kubenswrapper[4832]: I1002 20:00:34.307169 4832 generic.go:334] "Generic (PLEG): container finished" podID="e1a9ba76-0d0b-4504-a9da-995b5df88d5c" containerID="418e96607960afd757cb1a3f2d08700ea79f6695fc18ff4ffd089890f31cc523" exitCode=0 Oct 02 20:00:34 crc kubenswrapper[4832]: I1002 20:00:34.307385 4832 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" event={"ID":"e1a9ba76-0d0b-4504-a9da-995b5df88d5c","Type":"ContainerDied","Data":"418e96607960afd757cb1a3f2d08700ea79f6695fc18ff4ffd089890f31cc523"} Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.451078 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.497452 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-bzhtk"] Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.508132 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-bzhtk"] Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.577278 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-host\") pod \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\" (UID: \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\") " Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.577419 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhrxk\" (UniqueName: \"kubernetes.io/projected/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-kube-api-access-vhrxk\") pod \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\" (UID: \"e1a9ba76-0d0b-4504-a9da-995b5df88d5c\") " Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.577629 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-host" (OuterVolumeSpecName: "host") pod "e1a9ba76-0d0b-4504-a9da-995b5df88d5c" (UID: "e1a9ba76-0d0b-4504-a9da-995b5df88d5c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.578114 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-host\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.583392 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-kube-api-access-vhrxk" (OuterVolumeSpecName: "kube-api-access-vhrxk") pod "e1a9ba76-0d0b-4504-a9da-995b5df88d5c" (UID: "e1a9ba76-0d0b-4504-a9da-995b5df88d5c"). InnerVolumeSpecName "kube-api-access-vhrxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:00:35 crc kubenswrapper[4832]: I1002 20:00:35.680000 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhrxk\" (UniqueName: \"kubernetes.io/projected/e1a9ba76-0d0b-4504-a9da-995b5df88d5c-kube-api-access-vhrxk\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.329752 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbe4ed314a889f2968de3b26dde7b83b456e2bf1b66462df425fce93855d2a87" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.329801 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-bzhtk" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.679746 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-nschj"] Oct 02 20:00:36 crc kubenswrapper[4832]: E1002 20:00:36.680551 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a9ba76-0d0b-4504-a9da-995b5df88d5c" containerName="container-00" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.680574 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a9ba76-0d0b-4504-a9da-995b5df88d5c" containerName="container-00" Oct 02 20:00:36 crc kubenswrapper[4832]: E1002 20:00:36.680599 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4" containerName="collect-profiles" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.680612 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4" containerName="collect-profiles" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.681077 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb89f54-7ca9-43e7-8372-6d2e4bf2d9e4" containerName="collect-profiles" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.681116 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a9ba76-0d0b-4504-a9da-995b5df88d5c" containerName="container-00" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.682602 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.811501 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqjtd\" (UniqueName: \"kubernetes.io/projected/b877024e-13f3-42f5-acee-1eb96abb9b36-kube-api-access-fqjtd\") pod \"crc-debug-nschj\" (UID: \"b877024e-13f3-42f5-acee-1eb96abb9b36\") " pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.811722 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b877024e-13f3-42f5-acee-1eb96abb9b36-host\") pod \"crc-debug-nschj\" (UID: \"b877024e-13f3-42f5-acee-1eb96abb9b36\") " pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.914931 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqjtd\" (UniqueName: \"kubernetes.io/projected/b877024e-13f3-42f5-acee-1eb96abb9b36-kube-api-access-fqjtd\") pod \"crc-debug-nschj\" (UID: \"b877024e-13f3-42f5-acee-1eb96abb9b36\") " pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.915392 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b877024e-13f3-42f5-acee-1eb96abb9b36-host\") pod \"crc-debug-nschj\" (UID: \"b877024e-13f3-42f5-acee-1eb96abb9b36\") " pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:36 crc kubenswrapper[4832]: I1002 20:00:36.915537 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b877024e-13f3-42f5-acee-1eb96abb9b36-host\") pod \"crc-debug-nschj\" (UID: \"b877024e-13f3-42f5-acee-1eb96abb9b36\") " pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:36 crc 
kubenswrapper[4832]: I1002 20:00:36.949063 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqjtd\" (UniqueName: \"kubernetes.io/projected/b877024e-13f3-42f5-acee-1eb96abb9b36-kube-api-access-fqjtd\") pod \"crc-debug-nschj\" (UID: \"b877024e-13f3-42f5-acee-1eb96abb9b36\") " pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:37 crc kubenswrapper[4832]: I1002 20:00:37.002132 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:37 crc kubenswrapper[4832]: I1002 20:00:37.238864 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a9ba76-0d0b-4504-a9da-995b5df88d5c" path="/var/lib/kubelet/pods/e1a9ba76-0d0b-4504-a9da-995b5df88d5c/volumes" Oct 02 20:00:37 crc kubenswrapper[4832]: I1002 20:00:37.348873 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/crc-debug-nschj" event={"ID":"b877024e-13f3-42f5-acee-1eb96abb9b36","Type":"ContainerStarted","Data":"278cc3a117e1b1ebd6d5c65bc171378a0e84281fb2e5d9415ae1fe3ed8d98856"} Oct 02 20:00:37 crc kubenswrapper[4832]: I1002 20:00:37.372334 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6gqdd/crc-debug-nschj" podStartSLOduration=1.372306486 podStartE2EDuration="1.372306486s" podCreationTimestamp="2025-10-02 20:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:00:37.364342189 +0000 UTC m=+5994.333785121" watchObservedRunningTime="2025-10-02 20:00:37.372306486 +0000 UTC m=+5994.341749398" Oct 02 20:00:38 crc kubenswrapper[4832]: I1002 20:00:38.361649 4832 generic.go:334] "Generic (PLEG): container finished" podID="b877024e-13f3-42f5-acee-1eb96abb9b36" containerID="36fad3bf52d19a29b72b34161016db9c1e273eaa7201a6f863d742ca612d52fb" exitCode=0 Oct 02 20:00:38 crc kubenswrapper[4832]: I1002 20:00:38.361761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/crc-debug-nschj" event={"ID":"b877024e-13f3-42f5-acee-1eb96abb9b36","Type":"ContainerDied","Data":"36fad3bf52d19a29b72b34161016db9c1e273eaa7201a6f863d742ca612d52fb"} Oct 02 20:00:39 crc kubenswrapper[4832]: I1002 20:00:39.507293 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:39 crc kubenswrapper[4832]: I1002 20:00:39.695733 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqjtd\" (UniqueName: \"kubernetes.io/projected/b877024e-13f3-42f5-acee-1eb96abb9b36-kube-api-access-fqjtd\") pod \"b877024e-13f3-42f5-acee-1eb96abb9b36\" (UID: \"b877024e-13f3-42f5-acee-1eb96abb9b36\") " Oct 02 20:00:39 crc kubenswrapper[4832]: I1002 20:00:39.696064 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b877024e-13f3-42f5-acee-1eb96abb9b36-host\") pod \"b877024e-13f3-42f5-acee-1eb96abb9b36\" (UID: \"b877024e-13f3-42f5-acee-1eb96abb9b36\") " Oct 02 20:00:39 crc kubenswrapper[4832]: I1002 20:00:39.696167 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b877024e-13f3-42f5-acee-1eb96abb9b36-host" (OuterVolumeSpecName: "host") pod "b877024e-13f3-42f5-acee-1eb96abb9b36" (UID: "b877024e-13f3-42f5-acee-1eb96abb9b36"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 20:00:39 crc kubenswrapper[4832]: I1002 20:00:39.696736 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b877024e-13f3-42f5-acee-1eb96abb9b36-host\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:39 crc kubenswrapper[4832]: I1002 20:00:39.706509 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b877024e-13f3-42f5-acee-1eb96abb9b36-kube-api-access-fqjtd" (OuterVolumeSpecName: "kube-api-access-fqjtd") pod "b877024e-13f3-42f5-acee-1eb96abb9b36" (UID: "b877024e-13f3-42f5-acee-1eb96abb9b36"). InnerVolumeSpecName "kube-api-access-fqjtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:00:39 crc kubenswrapper[4832]: I1002 20:00:39.799294 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqjtd\" (UniqueName: \"kubernetes.io/projected/b877024e-13f3-42f5-acee-1eb96abb9b36-kube-api-access-fqjtd\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:40 crc kubenswrapper[4832]: I1002 20:00:40.383897 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/crc-debug-nschj" event={"ID":"b877024e-13f3-42f5-acee-1eb96abb9b36","Type":"ContainerDied","Data":"278cc3a117e1b1ebd6d5c65bc171378a0e84281fb2e5d9415ae1fe3ed8d98856"} Oct 02 20:00:40 crc kubenswrapper[4832]: I1002 20:00:40.383947 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="278cc3a117e1b1ebd6d5c65bc171378a0e84281fb2e5d9415ae1fe3ed8d98856" Oct 02 20:00:40 crc kubenswrapper[4832]: I1002 20:00:40.383961 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-nschj" Oct 02 20:00:47 crc kubenswrapper[4832]: I1002 20:00:47.897975 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-nschj"] Oct 02 20:00:47 crc kubenswrapper[4832]: I1002 20:00:47.911831 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-nschj"] Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.109121 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-bgc7x"] Oct 02 20:00:49 crc kubenswrapper[4832]: E1002 20:00:49.110928 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b877024e-13f3-42f5-acee-1eb96abb9b36" containerName="container-00" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.111293 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b877024e-13f3-42f5-acee-1eb96abb9b36" containerName="container-00" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.111667 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b877024e-13f3-42f5-acee-1eb96abb9b36" containerName="container-00" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.112833 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.209843 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plm62\" (UniqueName: \"kubernetes.io/projected/6507acad-7fb1-4699-95c0-1b5f59984640-kube-api-access-plm62\") pod \"crc-debug-bgc7x\" (UID: \"6507acad-7fb1-4699-95c0-1b5f59984640\") " pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.209927 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6507acad-7fb1-4699-95c0-1b5f59984640-host\") pod \"crc-debug-bgc7x\" (UID: \"6507acad-7fb1-4699-95c0-1b5f59984640\") " pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.237860 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b877024e-13f3-42f5-acee-1eb96abb9b36" path="/var/lib/kubelet/pods/b877024e-13f3-42f5-acee-1eb96abb9b36/volumes" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.315904 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plm62\" (UniqueName: \"kubernetes.io/projected/6507acad-7fb1-4699-95c0-1b5f59984640-kube-api-access-plm62\") pod \"crc-debug-bgc7x\" (UID: \"6507acad-7fb1-4699-95c0-1b5f59984640\") " pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.317139 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6507acad-7fb1-4699-95c0-1b5f59984640-host\") pod \"crc-debug-bgc7x\" (UID: \"6507acad-7fb1-4699-95c0-1b5f59984640\") " pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.317959 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6507acad-7fb1-4699-95c0-1b5f59984640-host\") pod \"crc-debug-bgc7x\" (UID: \"6507acad-7fb1-4699-95c0-1b5f59984640\") " pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.337221 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plm62\" (UniqueName: \"kubernetes.io/projected/6507acad-7fb1-4699-95c0-1b5f59984640-kube-api-access-plm62\") pod \"crc-debug-bgc7x\" (UID: \"6507acad-7fb1-4699-95c0-1b5f59984640\") " pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.430229 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:49 crc kubenswrapper[4832]: I1002 20:00:49.509388 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" event={"ID":"6507acad-7fb1-4699-95c0-1b5f59984640","Type":"ContainerStarted","Data":"248a20d41b6c6b6ba6301cba14b10792de3a218d63ded0b32f990c668b97d44c"} Oct 02 20:00:50 crc kubenswrapper[4832]: I1002 20:00:50.526339 4832 generic.go:334] "Generic (PLEG): container finished" podID="6507acad-7fb1-4699-95c0-1b5f59984640" containerID="d4a2cba3eb78be671aa970622a76d74efb3e3882d8520ec16e248bc3813e99c3" exitCode=0 Oct 02 20:00:50 crc kubenswrapper[4832]: I1002 20:00:50.526399 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" event={"ID":"6507acad-7fb1-4699-95c0-1b5f59984640","Type":"ContainerDied","Data":"d4a2cba3eb78be671aa970622a76d74efb3e3882d8520ec16e248bc3813e99c3"} Oct 02 20:00:50 crc kubenswrapper[4832]: I1002 20:00:50.584343 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-bgc7x"] Oct 02 20:00:50 crc kubenswrapper[4832]: I1002 20:00:50.603153 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6gqdd/crc-debug-bgc7x"] Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.214472 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p92m4"] Oct 02 20:00:51 crc kubenswrapper[4832]: E1002 20:00:51.218236 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6507acad-7fb1-4699-95c0-1b5f59984640" containerName="container-00" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.218274 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="6507acad-7fb1-4699-95c0-1b5f59984640" containerName="container-00" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.220433 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="6507acad-7fb1-4699-95c0-1b5f59984640" containerName="container-00" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.243423 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.282466 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p92m4"] Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.373737 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h66dr\" (UniqueName: \"kubernetes.io/projected/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-kube-api-access-h66dr\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.373834 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-utilities\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.373943 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-catalog-content\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.476247 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h66dr\" (UniqueName: \"kubernetes.io/projected/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-kube-api-access-h66dr\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.476353 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-utilities\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.476450 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-catalog-content\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.476873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-utilities\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.476938 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-catalog-content\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.495534 4832 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-h66dr\" (UniqueName: \"kubernetes.io/projected/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-kube-api-access-h66dr\") pod \"redhat-marketplace-p92m4\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.590873 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.737278 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.895232 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plm62\" (UniqueName: \"kubernetes.io/projected/6507acad-7fb1-4699-95c0-1b5f59984640-kube-api-access-plm62\") pod \"6507acad-7fb1-4699-95c0-1b5f59984640\" (UID: \"6507acad-7fb1-4699-95c0-1b5f59984640\") " Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.895426 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6507acad-7fb1-4699-95c0-1b5f59984640-host\") pod \"6507acad-7fb1-4699-95c0-1b5f59984640\" (UID: \"6507acad-7fb1-4699-95c0-1b5f59984640\") " Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.896244 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6507acad-7fb1-4699-95c0-1b5f59984640-host" (OuterVolumeSpecName: "host") pod "6507acad-7fb1-4699-95c0-1b5f59984640" (UID: "6507acad-7fb1-4699-95c0-1b5f59984640"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.911025 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6507acad-7fb1-4699-95c0-1b5f59984640-kube-api-access-plm62" (OuterVolumeSpecName: "kube-api-access-plm62") pod "6507acad-7fb1-4699-95c0-1b5f59984640" (UID: "6507acad-7fb1-4699-95c0-1b5f59984640"). InnerVolumeSpecName "kube-api-access-plm62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.998039 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6507acad-7fb1-4699-95c0-1b5f59984640-host\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:51 crc kubenswrapper[4832]: I1002 20:00:51.998308 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plm62\" (UniqueName: \"kubernetes.io/projected/6507acad-7fb1-4699-95c0-1b5f59984640-kube-api-access-plm62\") on node \"crc\" DevicePath \"\"" Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.108184 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p92m4"] Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.559867 4832 generic.go:334] "Generic (PLEG): container finished" podID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerID="9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33" exitCode=0 Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.559970 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p92m4" event={"ID":"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a","Type":"ContainerDied","Data":"9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33"} Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.560197 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p92m4" event={"ID":"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a","Type":"ContainerStarted","Data":"bceda2c6d0970e2261c3cb5fa04f3265b994fa29ff58243b59fd54fbf6c96064"} Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.562833 4832 scope.go:117] "RemoveContainer" containerID="d4a2cba3eb78be671aa970622a76d74efb3e3882d8520ec16e248bc3813e99c3" Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.562993 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6gqdd/crc-debug-bgc7x" Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.595482 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/util/0.log" Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.818153 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/util/0.log" Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.819578 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/pull/0.log" Oct 02 20:00:52 crc kubenswrapper[4832]: I1002 20:00:52.853814 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/pull/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.003992 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/util/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.005701 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/extract/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.010579 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/pull/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.218555 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c675fb79f-szf5c_2cca84e4-3eb8-41c8-95db-f5b755e83758/kube-rbac-proxy/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.238178 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6507acad-7fb1-4699-95c0-1b5f59984640" path="/var/lib/kubelet/pods/6507acad-7fb1-4699-95c0-1b5f59984640/volumes" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.297185 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c675fb79f-szf5c_2cca84e4-3eb8-41c8-95db-f5b755e83758/manager/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.311082 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79d68d6c85-98l5q_3a910552-db07-45ab-9f11-5b5051a1d070/kube-rbac-proxy/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.484140 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79d68d6c85-98l5q_3a910552-db07-45ab-9f11-5b5051a1d070/manager/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.513377 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-sqc96_d06661d7-5a41-4954-bfe4-8d25a9aa49d1/manager/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.529029 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-sqc96_d06661d7-5a41-4954-bfe4-8d25a9aa49d1/kube-rbac-proxy/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.688635 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-846dff85b5-9pnvm_0fa21051-6127-497b-a7dc-f4156314397e/kube-rbac-proxy/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.855325 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-846dff85b5-9pnvm_0fa21051-6127-497b-a7dc-f4156314397e/manager/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.917026 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-599898f689-rnsmf_7a36740f-eefd-4d9d-afe2-491d02a75fa6/kube-rbac-proxy/0.log" Oct 02 20:00:53 crc kubenswrapper[4832]: I1002 20:00:53.998634 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-599898f689-rnsmf_7a36740f-eefd-4d9d-afe2-491d02a75fa6/manager/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.169086 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6769b867d9-zsnms_b66dc2a1-b115-4952-99ce-866046ca9ea5/manager/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.181937 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6769b867d9-zsnms_b66dc2a1-b115-4952-99ce-866046ca9ea5/kube-rbac-proxy/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.256104 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5fbf469cd7-262c9_0835997a-eef2-4744-a6ed-dce8714f62f7/kube-rbac-proxy/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.525788 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-84bc9db6cc-xf6p9_93650652-02f0-403d-a9e6-6a71feb797c6/kube-rbac-proxy/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.561679 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-84bc9db6cc-xf6p9_93650652-02f0-403d-a9e6-6a71feb797c6/manager/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.589495 4832 generic.go:334] "Generic (PLEG): container finished" podID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerID="de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10" exitCode=0 Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.589547 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p92m4" event={"ID":"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a","Type":"ContainerDied","Data":"de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10"} Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.623104 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5fbf469cd7-262c9_0835997a-eef2-4744-a6ed-dce8714f62f7/manager/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.802485 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f55849f88-fv5h5_b2e208d4-436d-4e17-b4b3-165b130164c7/kube-rbac-proxy/0.log" Oct 02 20:00:54 
crc kubenswrapper[4832]: I1002 20:00:54.809424 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f55849f88-fv5h5_b2e208d4-436d-4e17-b4b3-165b130164c7/manager/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.935784 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6fd6854b49-ts5sn_5ae05766-702f-4f1d-a149-a01663fd2b53/kube-rbac-proxy/0.log" Oct 02 20:00:54 crc kubenswrapper[4832]: I1002 20:00:54.977225 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6fd6854b49-ts5sn_5ae05766-702f-4f1d-a149-a01663fd2b53/manager/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.032296 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5c468bf4d4-nb6x7_6049f0ba-16e6-4773-bc16-d26b6e04364e/kube-rbac-proxy/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.144159 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5c468bf4d4-nb6x7_6049f0ba-16e6-4773-bc16-d26b6e04364e/manager/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.203139 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6574bf987d-79nms_36d178a5-f367-4534-ab1e-54c162ce2961/kube-rbac-proxy/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.276873 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6574bf987d-79nms_36d178a5-f367-4534-ab1e-54c162ce2961/manager/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.407081 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-555c7456bd-v8gc6_60b0fee3-0856-4087-ad87-0a4847e3613c/kube-rbac-proxy/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.465994 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-555c7456bd-v8gc6_60b0fee3-0856-4087-ad87-0a4847e3613c/manager/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.587690 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-59d6cfdf45-tsswz_655a4d07-4b1f-420e-b676-8e5094960f64/kube-rbac-proxy/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.601469 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p92m4" event={"ID":"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a","Type":"ContainerStarted","Data":"08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c"} Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.626748 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p92m4" podStartSLOduration=2.201274078 podStartE2EDuration="4.626726239s" podCreationTimestamp="2025-10-02 20:00:51 +0000 UTC" firstStartedPulling="2025-10-02 20:00:52.562066024 +0000 UTC m=+6009.531508896" lastFinishedPulling="2025-10-02 20:00:54.987518185 +0000 UTC m=+6011.956961057" observedRunningTime="2025-10-02 20:00:55.621117715 +0000 UTC m=+6012.590560587" watchObservedRunningTime="2025-10-02 20:00:55.626726239 +0000 UTC m=+6012.596169111" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 
20:00:55.705667 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-59d6cfdf45-tsswz_655a4d07-4b1f-420e-b676-8e5094960f64/manager/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.745967 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr_30502d18-201c-4133-b25a-7b1e96ce21cf/kube-rbac-proxy/0.log" Oct 02 20:00:55 crc kubenswrapper[4832]: I1002 20:00:55.848426 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr_30502d18-201c-4133-b25a-7b1e96ce21cf/manager/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.013693 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7bffff79d9-sj2rb_47968f19-fabf-423c-9cf6-1d8b57654e3f/kube-rbac-proxy/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.191617 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7c58d4ffff-bz6pp_37a13ba7-6567-4720-9a8d-ce1c3420bfb2/kube-rbac-proxy/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.340668 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rqvl5_791e5e7f-81c9-4e84-baa4-d1f1f752ed7b/registry-server/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.422906 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7c58d4ffff-bz6pp_37a13ba7-6567-4720-9a8d-ce1c3420bfb2/operator/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.508680 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-688db7b6c7-p9wql_8b694594-41bd-4e62-a202-951f85430ff6/kube-rbac-proxy/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.634315 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-688db7b6c7-p9wql_8b694594-41bd-4e62-a202-951f85430ff6/manager/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.692371 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7d8bb7f44c-vqrd9_e8179b13-12b7-492d-bc86-f5543cfcbfbb/kube-rbac-proxy/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.753081 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7d8bb7f44c-vqrd9_e8179b13-12b7-492d-bc86-f5543cfcbfbb/manager/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.874981 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.875041 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 20:00:56 crc 
kubenswrapper[4832]: I1002 20:00:56.875094 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.876011 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.876074 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" gracePeriod=600 Oct 02 20:00:56 crc kubenswrapper[4832]: I1002 20:00:56.920911 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-59mvq_6ad88169-8b29-4078-90a5-759d2cb18325/operator/0.log" Oct 02 20:00:56 crc kubenswrapper[4832]: E1002 20:00:56.998096 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.124677 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-97td6_2ac2d023-64bc-4653-a8eb-2dd5ed49313c/kube-rbac-proxy/0.log" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.173377 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-97td6_2ac2d023-64bc-4653-a8eb-2dd5ed49313c/manager/0.log" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.218945 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-769bf6645d-wj4tb_e6f36bc2-bb15-47f7-9881-05f35c2c513c/kube-rbac-proxy/0.log" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.382960 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7bffff79d9-sj2rb_47968f19-fabf-423c-9cf6-1d8b57654e3f/manager/0.log" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.442491 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-frks4_13334024-dee1-47bd-aebe-22df02b93ea0/kube-rbac-proxy/0.log" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.489126 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-frks4_13334024-dee1-47bd-aebe-22df02b93ea0/manager/0.log" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.640765 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" exitCode=0 Oct 02 
20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.640820 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32"} Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.640861 4832 scope.go:117] "RemoveContainer" containerID="879114e00327d2a95c39bc82519c53c03988b64ac14d0a277e3ce03b5d041d68" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.641739 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:00:57 crc kubenswrapper[4832]: E1002 20:00:57.642154 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.673802 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-fcd7d9895-plmxx_45bbf7cb-04fb-4076-af85-0cecd610a929/manager/0.log" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.714202 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-fcd7d9895-plmxx_45bbf7cb-04fb-4076-af85-0cecd610a929/kube-rbac-proxy/0.log" Oct 02 20:00:57 crc kubenswrapper[4832]: I1002 20:00:57.741172 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-769bf6645d-wj4tb_e6f36bc2-bb15-47f7-9881-05f35c2c513c/manager/0.log" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.150742 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29323921-bnpxq"] Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.152928 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.170286 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29323921-bnpxq"] Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.287326 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-combined-ca-bundle\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.287642 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-fernet-keys\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.287821 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-config-data\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.288143 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvcs\" (UniqueName: \"kubernetes.io/projected/7b64de45-584e-449a-9bfb-85d7b5ad5879-kube-api-access-pdvcs\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.390297 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-combined-ca-bundle\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.390934 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-fernet-keys\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.391122 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-config-data\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.391327 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvcs\" (UniqueName: \"kubernetes.io/projected/7b64de45-584e-449a-9bfb-85d7b5ad5879-kube-api-access-pdvcs\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.398953 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-config-data\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.400193 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-fernet-keys\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.409933 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-combined-ca-bundle\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.410983 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdvcs\" (UniqueName: \"kubernetes.io/projected/7b64de45-584e-449a-9bfb-85d7b5ad5879-kube-api-access-pdvcs\") pod \"keystone-cron-29323921-bnpxq\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.476660 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:00 crc kubenswrapper[4832]: I1002 20:01:00.983476 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29323921-bnpxq"] Oct 02 20:01:01 crc kubenswrapper[4832]: I1002 20:01:01.591057 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:01:01 crc kubenswrapper[4832]: I1002 20:01:01.591919 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:01:01 crc kubenswrapper[4832]: I1002 20:01:01.655942 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:01:01 crc kubenswrapper[4832]: I1002 20:01:01.693838 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323921-bnpxq" event={"ID":"7b64de45-584e-449a-9bfb-85d7b5ad5879","Type":"ContainerStarted","Data":"8f9701e52062af8466ae2fcab390b24eb016ebcbf13ebc5e323b44d561aa1b7c"} Oct 02 20:01:01 crc kubenswrapper[4832]: I1002 20:01:01.693879 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323921-bnpxq" event={"ID":"7b64de45-584e-449a-9bfb-85d7b5ad5879","Type":"ContainerStarted","Data":"8f7de9e608b66f0ffe66938e1b702fc94b92fc3c73210bda87d570b364ee968a"} Oct 02 20:01:01 crc kubenswrapper[4832]: I1002 20:01:01.718270 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29323921-bnpxq" podStartSLOduration=1.718190403 podStartE2EDuration="1.718190403s" podCreationTimestamp="2025-10-02 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:01:01.709453901 +0000 UTC m=+6018.678896783" watchObservedRunningTime="2025-10-02 20:01:01.718190403 +0000 UTC m=+6018.687633285" Oct 02 20:01:01 crc kubenswrapper[4832]: 
I1002 20:01:01.764398 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:01:01 crc kubenswrapper[4832]: I1002 20:01:01.902827 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p92m4"] Oct 02 20:01:03 crc kubenswrapper[4832]: I1002 20:01:03.719467 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p92m4" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerName="registry-server" containerID="cri-o://08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c" gracePeriod=2 Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.316523 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.404398 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-utilities\") pod \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.404522 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-catalog-content\") pod \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.404585 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h66dr\" (UniqueName: \"kubernetes.io/projected/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-kube-api-access-h66dr\") pod \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\" (UID: \"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a\") " Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.405378 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-utilities" (OuterVolumeSpecName: "utilities") pod "f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" (UID: "f052e9de-5d6b-47e4-aac5-b34a4adf6c7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.413654 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-kube-api-access-h66dr" (OuterVolumeSpecName: "kube-api-access-h66dr") pod "f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" (UID: "f052e9de-5d6b-47e4-aac5-b34a4adf6c7a"). InnerVolumeSpecName "kube-api-access-h66dr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.424070 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" (UID: "f052e9de-5d6b-47e4-aac5-b34a4adf6c7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.507648 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.507688 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.507701 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h66dr\" (UniqueName: \"kubernetes.io/projected/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a-kube-api-access-h66dr\") on node \"crc\" DevicePath \"\"" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.733891 4832 generic.go:334] "Generic (PLEG): container finished" podID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerID="08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c" exitCode=0 Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.733962 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p92m4" event={"ID":"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a","Type":"ContainerDied","Data":"08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c"} Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.734006 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p92m4" event={"ID":"f052e9de-5d6b-47e4-aac5-b34a4adf6c7a","Type":"ContainerDied","Data":"bceda2c6d0970e2261c3cb5fa04f3265b994fa29ff58243b59fd54fbf6c96064"} Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.734037 4832 scope.go:117] "RemoveContainer" containerID="08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.734234 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p92m4" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.783582 4832 scope.go:117] "RemoveContainer" containerID="de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.786865 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p92m4"] Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.808049 4832 scope.go:117] "RemoveContainer" containerID="9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.824280 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p92m4"] Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.894657 4832 scope.go:117] "RemoveContainer" containerID="08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c" Oct 02 20:01:04 crc kubenswrapper[4832]: E1002 20:01:04.901809 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c\": container with ID starting with 08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c not found: ID does not exist" containerID="08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.901867 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c"} err="failed to get container status \"08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c\": rpc error: code = NotFound desc = could not find container \"08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c\": container with ID starting with 08dc71308f824511b9531071fcd976d40bc7f25472a088b16ecc213f7089647c not found: ID does not exist" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.901900 4832 scope.go:117] "RemoveContainer" containerID="de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10" Oct 02 20:01:04 crc kubenswrapper[4832]: E1002 20:01:04.902201 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10\": container with ID starting with de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10 not found: ID does not exist" containerID="de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.902238 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10"} err="failed to get container status \"de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10\": rpc error: code = NotFound desc = could not find container \"de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10\": container with ID starting with de2302e30eb2243e330dbe27a17b9a48c64bef3330ec8569d0da3412f9e0ed10 not found: ID does not exist" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.902256 4832 scope.go:117] "RemoveContainer" containerID="9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33" Oct 02 20:01:04 crc kubenswrapper[4832]: E1002 20:01:04.902652 4832 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33\": container with ID starting with 9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33 not found: ID does not exist" containerID="9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33" Oct 02 20:01:04 crc kubenswrapper[4832]: I1002 20:01:04.902681 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33"} err="failed to get container status \"9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33\": rpc error: code = NotFound desc = could not find container \"9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33\": container with ID starting with 9ca4b8f930b341b597b430b862dfb315a84bb4fee4a790013fecfb5a862a7f33 not found: ID does not exist" Oct 02 20:01:05 crc kubenswrapper[4832]: I1002 20:01:05.255736 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" path="/var/lib/kubelet/pods/f052e9de-5d6b-47e4-aac5-b34a4adf6c7a/volumes" Oct 02 20:01:06 crc kubenswrapper[4832]: I1002 20:01:06.759646 4832 generic.go:334] "Generic (PLEG): container finished" podID="7b64de45-584e-449a-9bfb-85d7b5ad5879" containerID="8f9701e52062af8466ae2fcab390b24eb016ebcbf13ebc5e323b44d561aa1b7c" exitCode=0 Oct 02 20:01:06 crc kubenswrapper[4832]: I1002 20:01:06.759732 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323921-bnpxq" event={"ID":"7b64de45-584e-449a-9bfb-85d7b5ad5879","Type":"ContainerDied","Data":"8f9701e52062af8466ae2fcab390b24eb016ebcbf13ebc5e323b44d561aa1b7c"} Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.185447 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.301752 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-combined-ca-bundle\") pod \"7b64de45-584e-449a-9bfb-85d7b5ad5879\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.301941 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdvcs\" (UniqueName: \"kubernetes.io/projected/7b64de45-584e-449a-9bfb-85d7b5ad5879-kube-api-access-pdvcs\") pod \"7b64de45-584e-449a-9bfb-85d7b5ad5879\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.302078 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-config-data\") pod \"7b64de45-584e-449a-9bfb-85d7b5ad5879\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.302109 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-fernet-keys\") pod \"7b64de45-584e-449a-9bfb-85d7b5ad5879\" (UID: \"7b64de45-584e-449a-9bfb-85d7b5ad5879\") " Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.307350 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b64de45-584e-449a-9bfb-85d7b5ad5879-kube-api-access-pdvcs" (OuterVolumeSpecName: "kube-api-access-pdvcs") pod "7b64de45-584e-449a-9bfb-85d7b5ad5879" (UID: "7b64de45-584e-449a-9bfb-85d7b5ad5879"). InnerVolumeSpecName "kube-api-access-pdvcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.308929 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7b64de45-584e-449a-9bfb-85d7b5ad5879" (UID: "7b64de45-584e-449a-9bfb-85d7b5ad5879"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.340352 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b64de45-584e-449a-9bfb-85d7b5ad5879" (UID: "7b64de45-584e-449a-9bfb-85d7b5ad5879"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.377883 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-config-data" (OuterVolumeSpecName: "config-data") pod "7b64de45-584e-449a-9bfb-85d7b5ad5879" (UID: "7b64de45-584e-449a-9bfb-85d7b5ad5879"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.406638 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdvcs\" (UniqueName: \"kubernetes.io/projected/7b64de45-584e-449a-9bfb-85d7b5ad5879-kube-api-access-pdvcs\") on node \"crc\" DevicePath \"\"" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.406673 4832 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-config-data\") on node \"crc\" DevicePath \"\"" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.406683 4832 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.406692 4832 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b64de45-584e-449a-9bfb-85d7b5ad5879-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.783394 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29323921-bnpxq" event={"ID":"7b64de45-584e-449a-9bfb-85d7b5ad5879","Type":"ContainerDied","Data":"8f7de9e608b66f0ffe66938e1b702fc94b92fc3c73210bda87d570b364ee968a"} Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.783502 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29323921-bnpxq" Oct 02 20:01:08 crc kubenswrapper[4832]: I1002 20:01:08.783901 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7de9e608b66f0ffe66938e1b702fc94b92fc3c73210bda87d570b364ee968a" Oct 02 20:01:12 crc kubenswrapper[4832]: I1002 20:01:12.224036 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:01:12 crc kubenswrapper[4832]: E1002 20:01:12.225042 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:01:15 crc kubenswrapper[4832]: I1002 20:01:15.142038 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nqc2q_5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce/control-plane-machine-set-operator/0.log" Oct 02 20:01:15 crc kubenswrapper[4832]: I1002 20:01:15.271806 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8fkn8_34d934c3-20c9-4091-844a-e4db7482d8e0/kube-rbac-proxy/0.log" Oct 02 20:01:15 crc kubenswrapper[4832]: I1002 20:01:15.376725 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8fkn8_34d934c3-20c9-4091-844a-e4db7482d8e0/machine-api-operator/0.log" Oct 02 20:01:25 crc kubenswrapper[4832]: I1002 20:01:25.235029 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:01:25 crc kubenswrapper[4832]: E1002 20:01:25.236069 4832 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:01:27 crc kubenswrapper[4832]: I1002 20:01:27.192903 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-mm5th_e3727292-356b-4969-bcb2-c57587cbf4a4/cert-manager-controller/0.log" Oct 02 20:01:27 crc kubenswrapper[4832]: I1002 20:01:27.385474 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-gmljf_4fa1e1e0-e670-4a90-9051-76f8448e9a9f/cert-manager-cainjector/0.log" Oct 02 20:01:27 crc kubenswrapper[4832]: I1002 20:01:27.423644 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-ml6fs_c98ef57c-5a9d-4947-a0e6-3658c7f54073/cert-manager-webhook/0.log" Oct 02 20:01:39 crc kubenswrapper[4832]: I1002 20:01:39.806944 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-mklc7_0c339642-1f25-4795-a62a-2db5045984cb/nmstate-console-plugin/0.log" Oct 02 20:01:39 crc kubenswrapper[4832]: I1002 20:01:39.988341 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-g62zg_477c57db-3df8-4587-abf7-ef94e8c4ad69/nmstate-handler/0.log" Oct 02 20:01:40 crc kubenswrapper[4832]: I1002 20:01:40.084941 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-vldj4_679a35a4-780b-431c-bb41-37763bf32d80/kube-rbac-proxy/0.log" Oct 02 20:01:40 crc kubenswrapper[4832]: I1002 20:01:40.115371 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-vldj4_679a35a4-780b-431c-bb41-37763bf32d80/nmstate-metrics/0.log" Oct 02 20:01:40 crc kubenswrapper[4832]: I1002 20:01:40.201637 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-llnvs_c2e56bcb-1ff3-4c1f-8353-b84a573d23b2/nmstate-operator/0.log" Oct 02 20:01:40 crc kubenswrapper[4832]: I1002 20:01:40.223549 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:01:40 crc kubenswrapper[4832]: E1002 20:01:40.223864 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:01:40 crc kubenswrapper[4832]: I1002 20:01:40.344549 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-wp5dm_d6480df8-e541-45f9-b397-d6abe2be00d3/nmstate-webhook/0.log" Oct 02 20:01:52 crc kubenswrapper[4832]: I1002 20:01:52.720585 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69857bc6ff-kl7qp_3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a/manager/0.log" Oct 02 20:01:52 crc kubenswrapper[4832]: 
I1002 20:01:52.748867 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69857bc6ff-kl7qp_3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a/kube-rbac-proxy/0.log" Oct 02 20:01:53 crc kubenswrapper[4832]: I1002 20:01:53.223702 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:01:53 crc kubenswrapper[4832]: E1002 20:01:53.223991 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:02:05 crc kubenswrapper[4832]: I1002 20:02:05.235863 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:02:05 crc kubenswrapper[4832]: E1002 20:02:05.236812 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:02:06 crc kubenswrapper[4832]: I1002 20:02:06.936872 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-8958c8b87-4cq9h_901f1678-8c0e-437e-bbb0-ba98d72c5aed/cluster-logging-operator/0.log" Oct 02 20:02:07 crc kubenswrapper[4832]: I1002 20:02:07.098125 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-jtbs5_07997706-bdc1-4e87-a7cb-9f5e4b85ea9c/collector/0.log" Oct 02 20:02:07 crc kubenswrapper[4832]: I1002 20:02:07.138745 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_a6a49601-8595-459e-b680-391e7b597054/loki-compactor/0.log" Oct 02 20:02:07 crc kubenswrapper[4832]: I1002 20:02:07.322949 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-6f5f7fff97-x5gwf_0ac07716-7573-4264-9530-b6dd1ea4ce14/loki-distributor/0.log" Oct 02 20:02:07 crc kubenswrapper[4832]: I1002 20:02:07.381427 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6f7dfcd5dd-w48kx_aa1f7d1a-2f01-4d70-b78c-0b28692ce57c/gateway/0.log" Oct 02 20:02:07 crc kubenswrapper[4832]: I1002 20:02:07.441361 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6f7dfcd5dd-w48kx_aa1f7d1a-2f01-4d70-b78c-0b28692ce57c/opa/0.log" Oct 02 20:02:07 crc kubenswrapper[4832]: I1002 20:02:07.573018 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6f7dfcd5dd-w4hsc_242ef9e3-c339-468a-b6a7-298dfab16a59/gateway/0.log" Oct 02 20:02:07 crc kubenswrapper[4832]: I1002 20:02:07.618327 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6f7dfcd5dd-w4hsc_242ef9e3-c339-468a-b6a7-298dfab16a59/opa/0.log" Oct 02 20:02:08 crc kubenswrapper[4832]: I1002 20:02:08.001508 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_9c643836-b5c3-48dc-8b08-f8a5bcbea2c7/loki-index-gateway/0.log" Oct 02 20:02:08 crc kubenswrapper[4832]: I1002 20:02:08.084096 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_757a6407-20f2-4b69-816a-6b01c7e5cc79/loki-ingester/0.log" Oct 02 20:02:08 crc kubenswrapper[4832]: I1002 20:02:08.187514 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5d954896cf-g55np_64a6d330-84e2-4071-9345-a5dd8496940a/loki-querier/0.log" Oct 02 20:02:08 crc kubenswrapper[4832]: I1002 20:02:08.199273 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6fbbbc8b7d-kqxp2_45f6d92a-3f93-4a09-8ed0-74ad13440476/loki-query-frontend/0.log" Oct 02 20:02:17 crc kubenswrapper[4832]: I1002 20:02:17.223596 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:02:17 crc kubenswrapper[4832]: E1002 20:02:17.225459 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:02:22 crc kubenswrapper[4832]: I1002 20:02:22.604049 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-xl8tj_80714958-8954-4014-97af-c480df6a6981/kube-rbac-proxy/0.log" Oct 02 20:02:22 crc kubenswrapper[4832]: I1002 20:02:22.684736 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-xl8tj_80714958-8954-4014-97af-c480df6a6981/controller/0.log" Oct 02 20:02:22 crc kubenswrapper[4832]: I1002 20:02:22.789293 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-frr-files/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.007025 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-reloader/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.007143 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-metrics/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.008769 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-reloader/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.051545 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-frr-files/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.252184 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-frr-files/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.270233 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-reloader/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.279949 4832 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-metrics/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.294566 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-metrics/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.493539 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-frr-files/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.503661 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-reloader/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.508757 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/controller/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.530366 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-metrics/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.677616 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/frr-metrics/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.697869 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/kube-rbac-proxy/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.735669 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/kube-rbac-proxy-frr/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.934064 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/reloader/0.log" Oct 02 20:02:23 crc kubenswrapper[4832]: I1002 20:02:23.996414 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-6zhbp_ce300e6d-f2b9-47e7-a85e-6a9543a69711/frr-k8s-webhook-server/0.log" Oct 02 20:02:24 crc kubenswrapper[4832]: I1002 20:02:24.281467 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d88c76f5f-jpcb4_964a5285-636b-4f3a-ab7d-226ff204c8f2/manager/0.log" Oct 02 20:02:24 crc kubenswrapper[4832]: I1002 20:02:24.504521 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7878588579-8s24k_94945b05-e6ed-4bb5-8dde-592b66304f50/webhook-server/0.log" Oct 02 20:02:24 crc kubenswrapper[4832]: I1002 20:02:24.613257 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-99pw7_c297584e-08f6-47c0-8acd-35bd207a9394/kube-rbac-proxy/0.log" Oct 02 20:02:25 crc kubenswrapper[4832]: I1002 20:02:25.246831 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-99pw7_c297584e-08f6-47c0-8acd-35bd207a9394/speaker/0.log" Oct 02 20:02:25 crc kubenswrapper[4832]: I1002 20:02:25.613533 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/frr/0.log" Oct 02 20:02:29 crc kubenswrapper[4832]: I1002 20:02:29.223208 4832 scope.go:117] "RemoveContainer" 
containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:02:29 crc kubenswrapper[4832]: E1002 20:02:29.224103 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:02:38 crc kubenswrapper[4832]: I1002 20:02:38.547505 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/util/0.log" Oct 02 20:02:38 crc kubenswrapper[4832]: I1002 20:02:38.774538 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/util/0.log" Oct 02 20:02:38 crc kubenswrapper[4832]: I1002 20:02:38.817020 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/pull/0.log" Oct 02 20:02:38 crc kubenswrapper[4832]: I1002 20:02:38.822705 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/pull/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.012216 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/util/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.024794 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/extract/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.050235 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/pull/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.188749 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/util/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.408041 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/pull/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.413426 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/util/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.417530 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/pull/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.591043 4832 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/util/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.625989 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/extract/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.639393 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/pull/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.757833 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/util/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.917653 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/pull/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.952697 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/util/0.log" Oct 02 20:02:39 crc kubenswrapper[4832]: I1002 20:02:39.977221 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/pull/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.145431 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/extract/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.153242 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/util/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.156615 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/pull/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.333805 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/util/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.508865 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/pull/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.552584 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/util/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.579518 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/pull/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.708745 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/util/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.721808 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/extract/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.768073 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/pull/0.log" Oct 02 20:02:40 crc kubenswrapper[4832]: I1002 20:02:40.901280 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-utilities/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.084369 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-utilities/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.095051 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-content/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.111624 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-content/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.301965 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-utilities/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.328312 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-content/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.541551 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-utilities/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.777512 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-content/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.801322 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-utilities/0.log" Oct 02 20:02:41 crc kubenswrapper[4832]: I1002 20:02:41.826584 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-content/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.085122 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-content/0.log" Oct 02 20:02:42 
crc kubenswrapper[4832]: I1002 20:02:42.117430 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-utilities/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.157388 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/registry-server/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.332920 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/util/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.400027 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/registry-server/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.560880 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/pull/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.573965 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/pull/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.581991 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/util/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.738724 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/util/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.739614 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/extract/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.768441 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/pull/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.824790 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vpnm2_4deba2ec-10ea-48dd-b732-a924f01ab1b7/marketplace-operator/0.log" Oct 02 20:02:42 crc kubenswrapper[4832]: I1002 20:02:42.946141 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-utilities/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.095335 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-utilities/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.095420 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-content/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.113071 4832 
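Each kubenswrapper line above follows the klog header shape: a severity letter plus date (e.g. I1002), a time, a PID, a source file:line, then a quoted message followed by key="value" fields. A small parsing sketch under that assumption; the regexes only cover the line shapes seen in this log (lines with an unquoted message, such as the operation_generator.go:803 entries further down, are skipped), not the full klog grammar:

```python
import re

# Minimal sketch: split a kubenswrapper/klog journal line into its header and
# key="value" fields. Pattern assumes lines shaped like the ones in this log.
LINE = re.compile(
    r'(?P<month>\w{3}) (?P<day>\d+) (?P<time>[\d:]+) (?P<host>\S+) '
    r'kubenswrapper\[(?P<unit_pid>\d+)\]: '
    r'(?P<sev>[IWEF])(?P<ts>\d{4} [\d:.]+)\s+(?P<pid>\d+) '
    r'(?P<src>[\w.]+:\d+)\] "(?P<msg>(?:[^"\\]|\\.)*)"(?P<rest>.*)'
)
FIELD = re.compile(r'(\w+)="((?:[^"\\]|\\.)*)"')

def parse(line: str):
    m = LINE.match(line)
    if not m:
        return None  # not a quoted-message klog line; skip
    rec = m.groupdict()
    rec["fields"] = dict(FIELD.findall(m.group("rest")))
    return rec

sample = ('Oct 02 20:02:38 crc kubenswrapper[4832]: I1002 20:02:38.547505 '
          '4832 log.go:25] "Finished parsing log file" path="/var/log/pods/x/0.log"')
print(parse(sample)["fields"]["path"])  # /var/log/pods/x/0.log
```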
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-content/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.258191 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-content/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.264793 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-utilities/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.334453 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-utilities/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.518946 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-content/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.529434 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-utilities/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.548861 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/registry-server/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.555891 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-content/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.736876 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-utilities/0.log" Oct 02 20:02:43 crc kubenswrapper[4832]: I1002 20:02:43.739694 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-content/0.log" Oct 02 20:02:44 crc kubenswrapper[4832]: I1002 20:02:44.222744 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:02:44 crc kubenswrapper[4832]: E1002 20:02:44.223077 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:02:44 crc kubenswrapper[4832]: I1002 20:02:44.468659 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/registry-server/0.log" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.068323 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25htx"] Oct 02 20:02:47 crc kubenswrapper[4832]: E1002 20:02:47.069185 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerName="registry-server" Oct 02 
20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.069197 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerName="registry-server" Oct 02 20:02:47 crc kubenswrapper[4832]: E1002 20:02:47.069210 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerName="extract-utilities" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.069216 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerName="extract-utilities" Oct 02 20:02:47 crc kubenswrapper[4832]: E1002 20:02:47.069235 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b64de45-584e-449a-9bfb-85d7b5ad5879" containerName="keystone-cron" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.069241 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b64de45-584e-449a-9bfb-85d7b5ad5879" containerName="keystone-cron" Oct 02 20:02:47 crc kubenswrapper[4832]: E1002 20:02:47.069253 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerName="extract-content" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.069285 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerName="extract-content" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.069504 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="f052e9de-5d6b-47e4-aac5-b34a4adf6c7a" containerName="registry-server" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.069529 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b64de45-584e-449a-9bfb-85d7b5ad5879" containerName="keystone-cron" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.071067 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.145918 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldv2k\" (UniqueName: \"kubernetes.io/projected/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-kube-api-access-ldv2k\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.146412 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-utilities\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.146728 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-catalog-content\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.148305 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25htx"] Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.249220 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldv2k\" (UniqueName: \"kubernetes.io/projected/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-kube-api-access-ldv2k\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.249966 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-utilities\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.250230 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-catalog-content\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.250555 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-utilities\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.250769 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-catalog-content\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.273744 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ldv2k\" (UniqueName: \"kubernetes.io/projected/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-kube-api-access-ldv2k\") pod \"community-operators-25htx\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:47 crc kubenswrapper[4832]: I1002 20:02:47.441190 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:48 crc kubenswrapper[4832]: I1002 20:02:48.001768 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25htx"] Oct 02 20:02:48 crc kubenswrapper[4832]: I1002 20:02:48.861530 4832 generic.go:334] "Generic (PLEG): container finished" podID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerID="8de4a138b66ef2107ad0ae20a5789ed06280545d7eed694cfbaa7f3202986b51" exitCode=0 Oct 02 20:02:48 crc kubenswrapper[4832]: I1002 20:02:48.861609 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25htx" event={"ID":"b14b8cea-7ecc-4edc-ac35-2f4b66f85877","Type":"ContainerDied","Data":"8de4a138b66ef2107ad0ae20a5789ed06280545d7eed694cfbaa7f3202986b51"} Oct 02 20:02:48 crc kubenswrapper[4832]: I1002 20:02:48.861961 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25htx" event={"ID":"b14b8cea-7ecc-4edc-ac35-2f4b66f85877","Type":"ContainerStarted","Data":"2ed7c397f3d90c3ce4cff41b8f009c76030a17e7c0e1940e478cc31d4cfa760e"} Oct 02 20:02:48 crc kubenswrapper[4832]: I1002 20:02:48.867786 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 20:02:49 crc kubenswrapper[4832]: I1002 20:02:49.880197 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25htx" event={"ID":"b14b8cea-7ecc-4edc-ac35-2f4b66f85877","Type":"ContainerStarted","Data":"8c97649ef5003b8920a181fab4af9b9d35e72301629b6cd4d705a693d942c0da"} Oct 02 20:02:51 crc kubenswrapper[4832]: I1002 20:02:51.902305 4832 generic.go:334] "Generic (PLEG): container finished" podID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerID="8c97649ef5003b8920a181fab4af9b9d35e72301629b6cd4d705a693d942c0da" exitCode=0 Oct 02 20:02:51 crc kubenswrapper[4832]: I1002 20:02:51.902680 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25htx" event={"ID":"b14b8cea-7ecc-4edc-ac35-2f4b66f85877","Type":"ContainerDied","Data":"8c97649ef5003b8920a181fab4af9b9d35e72301629b6cd4d705a693d942c0da"} Oct 02 20:02:52 crc kubenswrapper[4832]: I1002 20:02:52.921830 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25htx" event={"ID":"b14b8cea-7ecc-4edc-ac35-2f4b66f85877","Type":"ContainerStarted","Data":"d33f699902f3ec6350e2a0840310b3b11f22a4b2c3a46f1b48eabf8926654528"} Oct 02 20:02:52 crc kubenswrapper[4832]: I1002 20:02:52.944700 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25htx" podStartSLOduration=2.391981452 podStartE2EDuration="5.944677848s" podCreationTimestamp="2025-10-02 20:02:47 +0000 UTC" firstStartedPulling="2025-10-02 20:02:48.864356801 +0000 UTC m=+6125.833799673" lastFinishedPulling="2025-10-02 20:02:52.417053197 +0000 UTC m=+6129.386496069" observedRunningTime="2025-10-02 20:02:52.940694524 +0000 UTC m=+6129.910137396" watchObservedRunningTime="2025-10-02 
20:02:52.944677848 +0000 UTC m=+6129.914120720" Oct 02 20:02:56 crc kubenswrapper[4832]: I1002 20:02:56.362188 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-7c8cf85677-b4kkp_38be72b3-2875-4e11-895a-d7b229709e75/prometheus-operator/0.log" Oct 02 20:02:56 crc kubenswrapper[4832]: I1002 20:02:56.531591 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c/prometheus-operator-admission-webhook/0.log" Oct 02 20:02:56 crc kubenswrapper[4832]: I1002 20:02:56.532516 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_cf1d6e7f-c76f-4888-8465-3651cdd3c079/prometheus-operator-admission-webhook/0.log" Oct 02 20:02:56 crc kubenswrapper[4832]: I1002 20:02:56.718893 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-cc5f78dfc-sgrwh_9e59cc84-d625-4121-956d-773c5be0d917/operator/0.log" Oct 02 20:02:56 crc kubenswrapper[4832]: I1002 20:02:56.730115 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-6584dc9448-6ftdp_28fbc8db-b613-4de9-a177-3f7c5be4d857/observability-ui-dashboards/0.log" Oct 02 20:02:56 crc kubenswrapper[4832]: I1002 20:02:56.899726 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-54bc95c9fb-wczsw_72b2ae10-6b68-4738-9474-41a9fa1f9f92/perses-operator/0.log" Oct 02 20:02:57 crc kubenswrapper[4832]: I1002 20:02:57.441914 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:57 crc kubenswrapper[4832]: I1002 20:02:57.442234 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:57 crc kubenswrapper[4832]: I1002 20:02:57.499475 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:58 crc kubenswrapper[4832]: I1002 20:02:58.035431 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25htx" Oct 02 20:02:58 crc kubenswrapper[4832]: I1002 20:02:58.223942 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:02:58 crc kubenswrapper[4832]: E1002 20:02:58.224285 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:03:00 crc kubenswrapper[4832]: I1002 20:03:00.660441 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25htx"] Oct 02 20:03:00 crc kubenswrapper[4832]: I1002 20:03:00.661235 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-25htx" podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerName="registry-server" 
containerID="cri-o://d33f699902f3ec6350e2a0840310b3b11f22a4b2c3a46f1b48eabf8926654528" gracePeriod=2 Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.049457 4832 generic.go:334] "Generic (PLEG): container finished" podID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerID="d33f699902f3ec6350e2a0840310b3b11f22a4b2c3a46f1b48eabf8926654528" exitCode=0 Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.049802 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25htx" event={"ID":"b14b8cea-7ecc-4edc-ac35-2f4b66f85877","Type":"ContainerDied","Data":"d33f699902f3ec6350e2a0840310b3b11f22a4b2c3a46f1b48eabf8926654528"} Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.201562 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25htx" Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.309849 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-utilities\") pod \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.310048 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-catalog-content\") pod \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.310148 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldv2k\" (UniqueName: \"kubernetes.io/projected/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-kube-api-access-ldv2k\") pod \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\" (UID: \"b14b8cea-7ecc-4edc-ac35-2f4b66f85877\") " Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.310739 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-utilities" (OuterVolumeSpecName: "utilities") pod "b14b8cea-7ecc-4edc-ac35-2f4b66f85877" (UID: "b14b8cea-7ecc-4edc-ac35-2f4b66f85877"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.320542 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-kube-api-access-ldv2k" (OuterVolumeSpecName: "kube-api-access-ldv2k") pod "b14b8cea-7ecc-4edc-ac35-2f4b66f85877" (UID: "b14b8cea-7ecc-4edc-ac35-2f4b66f85877"). InnerVolumeSpecName "kube-api-access-ldv2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.359316 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b14b8cea-7ecc-4edc-ac35-2f4b66f85877" (UID: "b14b8cea-7ecc-4edc-ac35-2f4b66f85877"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.412501 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.412533 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 20:03:01 crc kubenswrapper[4832]: I1002 20:03:01.412544 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldv2k\" (UniqueName: \"kubernetes.io/projected/b14b8cea-7ecc-4edc-ac35-2f4b66f85877-kube-api-access-ldv2k\") on node \"crc\" DevicePath \"\"" Oct 02 20:03:02 crc kubenswrapper[4832]: I1002 20:03:02.064766 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25htx" event={"ID":"b14b8cea-7ecc-4edc-ac35-2f4b66f85877","Type":"ContainerDied","Data":"2ed7c397f3d90c3ce4cff41b8f009c76030a17e7c0e1940e478cc31d4cfa760e"} Oct 02 20:03:02 crc kubenswrapper[4832]: I1002 20:03:02.065201 4832 scope.go:117] "RemoveContainer" containerID="d33f699902f3ec6350e2a0840310b3b11f22a4b2c3a46f1b48eabf8926654528" Oct 02 20:03:02 crc kubenswrapper[4832]: I1002 20:03:02.064860 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25htx" Oct 02 20:03:02 crc kubenswrapper[4832]: I1002 20:03:02.117562 4832 scope.go:117] "RemoveContainer" containerID="8c97649ef5003b8920a181fab4af9b9d35e72301629b6cd4d705a693d942c0da" Oct 02 20:03:02 crc kubenswrapper[4832]: I1002 20:03:02.122888 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25htx"] Oct 02 20:03:02 crc kubenswrapper[4832]: I1002 20:03:02.137007 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-25htx"] Oct 02 20:03:02 crc kubenswrapper[4832]: I1002 20:03:02.142842 4832 scope.go:117] "RemoveContainer" containerID="8de4a138b66ef2107ad0ae20a5789ed06280545d7eed694cfbaa7f3202986b51" Oct 02 20:03:03 crc kubenswrapper[4832]: I1002 20:03:03.237480 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" path="/var/lib/kubelet/pods/b14b8cea-7ecc-4edc-ac35-2f4b66f85877/volumes" Oct 02 20:03:09 crc kubenswrapper[4832]: I1002 20:03:09.678338 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69857bc6ff-kl7qp_3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a/kube-rbac-proxy/0.log" Oct 02 20:03:09 crc kubenswrapper[4832]: I1002 20:03:09.744374 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69857bc6ff-kl7qp_3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a/manager/0.log" Oct 02 20:03:11 crc kubenswrapper[4832]: I1002 20:03:11.223187 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:03:11 crc kubenswrapper[4832]: E1002 20:03:11.223839 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:03:26 crc kubenswrapper[4832]: I1002 20:03:26.223711 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:03:26 crc kubenswrapper[4832]: E1002 20:03:26.224494 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:03:40 crc kubenswrapper[4832]: I1002 20:03:40.223377 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:03:40 crc kubenswrapper[4832]: E1002 20:03:40.224392 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:03:51 crc kubenswrapper[4832]: I1002 20:03:51.226038 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:03:51 crc kubenswrapper[4832]: E1002 20:03:51.226877 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:04:02 crc kubenswrapper[4832]: I1002 20:04:02.223510 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:04:02 crc kubenswrapper[4832]: E1002 20:04:02.226163 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:04:14 crc kubenswrapper[4832]: I1002 20:04:14.227745 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:04:14 crc kubenswrapper[4832]: E1002 20:04:14.228624 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" 
podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:04:24 crc kubenswrapper[4832]: I1002 20:04:24.544114 4832 scope.go:117] "RemoveContainer" containerID="418e96607960afd757cb1a3f2d08700ea79f6695fc18ff4ffd089890f31cc523" Oct 02 20:04:28 crc kubenswrapper[4832]: I1002 20:04:28.223092 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:04:28 crc kubenswrapper[4832]: E1002 20:04:28.223949 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:04:41 crc kubenswrapper[4832]: I1002 20:04:41.223187 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:04:41 crc kubenswrapper[4832]: E1002 20:04:41.224870 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:04:54 crc kubenswrapper[4832]: I1002 20:04:54.223257 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:04:54 crc kubenswrapper[4832]: E1002 20:04:54.224038 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:05:07 crc kubenswrapper[4832]: I1002 20:05:07.223335 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:05:07 crc kubenswrapper[4832]: E1002 20:05:07.224355 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:05:21 crc kubenswrapper[4832]: I1002 20:05:21.224530 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:05:21 crc kubenswrapper[4832]: E1002 20:05:21.225397 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" 
podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:05:23 crc kubenswrapper[4832]: I1002 20:05:23.772799 4832 generic.go:334] "Generic (PLEG): container finished" podID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerID="d719bd1ab1fdc5208be35aaee504e0d07dcec6c10d28217035821fabbd8d8889" exitCode=0 Oct 02 20:05:23 crc kubenswrapper[4832]: I1002 20:05:23.772921 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" event={"ID":"baf4098c-0d95-47f2-83fb-1ff3a4124869","Type":"ContainerDied","Data":"d719bd1ab1fdc5208be35aaee504e0d07dcec6c10d28217035821fabbd8d8889"} Oct 02 20:05:23 crc kubenswrapper[4832]: I1002 20:05:23.774134 4832 scope.go:117] "RemoveContainer" containerID="d719bd1ab1fdc5208be35aaee504e0d07dcec6c10d28217035821fabbd8d8889" Oct 02 20:05:24 crc kubenswrapper[4832]: I1002 20:05:24.693989 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6gqdd_must-gather-rdkxb_baf4098c-0d95-47f2-83fb-1ff3a4124869/gather/0.log" Oct 02 20:05:33 crc kubenswrapper[4832]: I1002 20:05:33.626197 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6gqdd/must-gather-rdkxb"] Oct 02 20:05:33 crc kubenswrapper[4832]: I1002 20:05:33.627409 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" podUID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerName="copy" containerID="cri-o://21e557d4c16f4b101ab3384abe079f5e83c30da758a8f8a82764ce4e369b6bc8" gracePeriod=2 Oct 02 20:05:33 crc kubenswrapper[4832]: I1002 20:05:33.640135 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6gqdd/must-gather-rdkxb"] Oct 02 20:05:33 crc kubenswrapper[4832]: I1002 20:05:33.906662 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6gqdd_must-gather-rdkxb_baf4098c-0d95-47f2-83fb-1ff3a4124869/copy/0.log" Oct 02 20:05:33 crc kubenswrapper[4832]: I1002 20:05:33.907118 4832 generic.go:334] "Generic (PLEG): container finished" podID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerID="21e557d4c16f4b101ab3384abe079f5e83c30da758a8f8a82764ce4e369b6bc8" exitCode=143 Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.150128 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6gqdd_must-gather-rdkxb_baf4098c-0d95-47f2-83fb-1ff3a4124869/copy/0.log" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.151029 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.179654 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5vmk\" (UniqueName: \"kubernetes.io/projected/baf4098c-0d95-47f2-83fb-1ff3a4124869-kube-api-access-v5vmk\") pod \"baf4098c-0d95-47f2-83fb-1ff3a4124869\" (UID: \"baf4098c-0d95-47f2-83fb-1ff3a4124869\") " Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.189798 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf4098c-0d95-47f2-83fb-1ff3a4124869-kube-api-access-v5vmk" (OuterVolumeSpecName: "kube-api-access-v5vmk") pod "baf4098c-0d95-47f2-83fb-1ff3a4124869" (UID: "baf4098c-0d95-47f2-83fb-1ff3a4124869"). InnerVolumeSpecName "kube-api-access-v5vmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.223723 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:05:34 crc kubenswrapper[4832]: E1002 20:05:34.224278 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.282868 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/baf4098c-0d95-47f2-83fb-1ff3a4124869-must-gather-output\") pod \"baf4098c-0d95-47f2-83fb-1ff3a4124869\" (UID: \"baf4098c-0d95-47f2-83fb-1ff3a4124869\") " Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.285106 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5vmk\" (UniqueName: \"kubernetes.io/projected/baf4098c-0d95-47f2-83fb-1ff3a4124869-kube-api-access-v5vmk\") on node \"crc\" DevicePath \"\"" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.543420 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf4098c-0d95-47f2-83fb-1ff3a4124869-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "baf4098c-0d95-47f2-83fb-1ff3a4124869" (UID: "baf4098c-0d95-47f2-83fb-1ff3a4124869"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.594340 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/baf4098c-0d95-47f2-83fb-1ff3a4124869-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.919436 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6gqdd_must-gather-rdkxb_baf4098c-0d95-47f2-83fb-1ff3a4124869/copy/0.log" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.919858 4832 scope.go:117] "RemoveContainer" containerID="21e557d4c16f4b101ab3384abe079f5e83c30da758a8f8a82764ce4e369b6bc8" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.919884 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6gqdd/must-gather-rdkxb" Oct 02 20:05:34 crc kubenswrapper[4832]: I1002 20:05:34.945874 4832 scope.go:117] "RemoveContainer" containerID="d719bd1ab1fdc5208be35aaee504e0d07dcec6c10d28217035821fabbd8d8889" Oct 02 20:05:35 crc kubenswrapper[4832]: I1002 20:05:35.236673 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf4098c-0d95-47f2-83fb-1ff3a4124869" path="/var/lib/kubelet/pods/baf4098c-0d95-47f2-83fb-1ff3a4124869/volumes" Oct 02 20:05:46 crc kubenswrapper[4832]: I1002 20:05:46.223786 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:05:46 crc kubenswrapper[4832]: E1002 20:05:46.224604 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:05:58 crc kubenswrapper[4832]: I1002 20:05:58.223652 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:05:59 crc kubenswrapper[4832]: I1002 20:05:59.209697 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"7ef522b7b58d76f3efc9401215fda40f442def9ce3c07d5450ebdc9abcdb576e"} Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.229484 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txx97/must-gather-9z9r4"] Oct 02 20:06:12 crc kubenswrapper[4832]: E1002 20:06:12.230515 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerName="registry-server" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.230529 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerName="registry-server" Oct 02 20:06:12 crc kubenswrapper[4832]: E1002 20:06:12.230548 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerName="gather" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.230556 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerName="gather" Oct 02 20:06:12 crc kubenswrapper[4832]: E1002 20:06:12.230571 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerName="extract-content" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.230576 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerName="extract-content" Oct 02 20:06:12 crc kubenswrapper[4832]: E1002 20:06:12.230594 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerName="copy" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.230599 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerName="copy" Oct 02 20:06:12 crc kubenswrapper[4832]: E1002 20:06:12.230615 4832 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerName="extract-utilities" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.230622 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerName="extract-utilities" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.230850 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerName="gather" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.230877 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14b8cea-7ecc-4edc-ac35-2f4b66f85877" containerName="registry-server" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.230890 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf4098c-0d95-47f2-83fb-1ff3a4124869" containerName="copy" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.232109 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.237994 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-txx97"/"openshift-service-ca.crt" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.238018 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-txx97"/"kube-root-ca.crt" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.247667 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-txx97/must-gather-9z9r4"] Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.364596 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/35969de4-68f3-4956-bf07-eff642d64df3-must-gather-output\") pod \"must-gather-9z9r4\" (UID: \"35969de4-68f3-4956-bf07-eff642d64df3\") " pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.364791 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvgl\" (UniqueName: \"kubernetes.io/projected/35969de4-68f3-4956-bf07-eff642d64df3-kube-api-access-9vvgl\") pod \"must-gather-9z9r4\" (UID: \"35969de4-68f3-4956-bf07-eff642d64df3\") " pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.467724 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/35969de4-68f3-4956-bf07-eff642d64df3-must-gather-output\") pod \"must-gather-9z9r4\" (UID: \"35969de4-68f3-4956-bf07-eff642d64df3\") " pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.467873 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvgl\" (UniqueName: \"kubernetes.io/projected/35969de4-68f3-4956-bf07-eff642d64df3-kube-api-access-9vvgl\") pod \"must-gather-9z9r4\" (UID: \"35969de4-68f3-4956-bf07-eff642d64df3\") " pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.468766 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/35969de4-68f3-4956-bf07-eff642d64df3-must-gather-output\") pod \"must-gather-9z9r4\" (UID: \"35969de4-68f3-4956-bf07-eff642d64df3\") " 
pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.495045 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvgl\" (UniqueName: \"kubernetes.io/projected/35969de4-68f3-4956-bf07-eff642d64df3-kube-api-access-9vvgl\") pod \"must-gather-9z9r4\" (UID: \"35969de4-68f3-4956-bf07-eff642d64df3\") " pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:06:12 crc kubenswrapper[4832]: I1002 20:06:12.551523 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:06:13 crc kubenswrapper[4832]: I1002 20:06:13.121177 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-txx97/must-gather-9z9r4"] Oct 02 20:06:13 crc kubenswrapper[4832]: I1002 20:06:13.370692 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/must-gather-9z9r4" event={"ID":"35969de4-68f3-4956-bf07-eff642d64df3","Type":"ContainerStarted","Data":"450d586e024e328e2aacf0567e4e602b7c4b861ed0f90a98f66ee3f36acd274a"} Oct 02 20:06:14 crc kubenswrapper[4832]: I1002 20:06:14.409601 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/must-gather-9z9r4" event={"ID":"35969de4-68f3-4956-bf07-eff642d64df3","Type":"ContainerStarted","Data":"ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9"} Oct 02 20:06:14 crc kubenswrapper[4832]: I1002 20:06:14.410321 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/must-gather-9z9r4" event={"ID":"35969de4-68f3-4956-bf07-eff642d64df3","Type":"ContainerStarted","Data":"ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9"} Oct 02 20:06:14 crc kubenswrapper[4832]: I1002 20:06:14.432154 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-txx97/must-gather-9z9r4" podStartSLOduration=2.432130486 podStartE2EDuration="2.432130486s" podCreationTimestamp="2025-10-02 20:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:06:14.42161264 +0000 UTC m=+6331.391055512" watchObservedRunningTime="2025-10-02 20:06:14.432130486 +0000 UTC m=+6331.401573358" Oct 02 20:06:16 crc kubenswrapper[4832]: I1002 20:06:16.905612 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txx97/crc-debug-8wtr8"] Oct 02 20:06:16 crc kubenswrapper[4832]: I1002 20:06:16.908497 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:06:16 crc kubenswrapper[4832]: I1002 20:06:16.910611 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-txx97"/"default-dockercfg-kxt6r" Oct 02 20:06:17 crc kubenswrapper[4832]: I1002 20:06:17.084156 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2thn\" (UniqueName: \"kubernetes.io/projected/38fd1917-13b3-4164-bc21-4e00bba6fd5e-kube-api-access-n2thn\") pod \"crc-debug-8wtr8\" (UID: \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\") " pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:06:17 crc kubenswrapper[4832]: I1002 20:06:17.084810 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38fd1917-13b3-4164-bc21-4e00bba6fd5e-host\") pod \"crc-debug-8wtr8\" (UID: \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\") " pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:06:17 crc kubenswrapper[4832]: I1002 20:06:17.186871 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38fd1917-13b3-4164-bc21-4e00bba6fd5e-host\") pod \"crc-debug-8wtr8\" (UID: \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\") " pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:06:17 crc kubenswrapper[4832]: I1002 20:06:17.187050 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2thn\" (UniqueName: \"kubernetes.io/projected/38fd1917-13b3-4164-bc21-4e00bba6fd5e-kube-api-access-n2thn\") pod \"crc-debug-8wtr8\" (UID: \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\") " pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:06:17 crc kubenswrapper[4832]: I1002 20:06:17.187070 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38fd1917-13b3-4164-bc21-4e00bba6fd5e-host\") pod \"crc-debug-8wtr8\" (UID: \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\") " pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:06:17 crc kubenswrapper[4832]: I1002 20:06:17.227969 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2thn\" (UniqueName: \"kubernetes.io/projected/38fd1917-13b3-4164-bc21-4e00bba6fd5e-kube-api-access-n2thn\") pod \"crc-debug-8wtr8\" (UID: \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\") " pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:06:17 crc kubenswrapper[4832]: I1002 20:06:17.239110 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:06:17 crc kubenswrapper[4832]: I1002 20:06:17.468861 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/crc-debug-8wtr8" event={"ID":"38fd1917-13b3-4164-bc21-4e00bba6fd5e","Type":"ContainerStarted","Data":"b64d2571b26503d96f498809af65809eb23023f9f3bf80cb22c1278c97b15163"} Oct 02 20:06:18 crc kubenswrapper[4832]: I1002 20:06:18.481511 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/crc-debug-8wtr8" event={"ID":"38fd1917-13b3-4164-bc21-4e00bba6fd5e","Type":"ContainerStarted","Data":"dfd921a7c5b408958e6c41e639578e05dcdf9dcee69e4c110e02bd0200504124"} Oct 02 20:06:18 crc kubenswrapper[4832]: I1002 20:06:18.495305 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-txx97/crc-debug-8wtr8" podStartSLOduration=2.49528308 podStartE2EDuration="2.49528308s" podCreationTimestamp="2025-10-02 20:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:06:18.493285497 +0000 UTC m=+6335.462728389" watchObservedRunningTime="2025-10-02 20:06:18.49528308 +0000 UTC m=+6335.464725962" Oct 02 20:06:18 crc kubenswrapper[4832]: E1002 20:06:18.990302 4832 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.180:39104->38.102.83.180:36377: write tcp 38.102.83.180:39104->38.102.83.180:36377: write: broken pipe Oct 02 20:07:24 crc kubenswrapper[4832]: I1002 20:07:24.755727 4832 scope.go:117] "RemoveContainer" containerID="36fad3bf52d19a29b72b34161016db9c1e273eaa7201a6f863d742ca612d52fb" Oct 02 20:07:40 crc kubenswrapper[4832]: I1002 20:07:40.866529 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_6bd80e3d-9654-4e34-8739-e718f4884c75/aodh-evaluator/0.log" Oct 02 20:07:40 crc kubenswrapper[4832]: I1002 20:07:40.873932 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_6bd80e3d-9654-4e34-8739-e718f4884c75/aodh-api/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.031720 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_6bd80e3d-9654-4e34-8739-e718f4884c75/aodh-listener/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.080493 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_6bd80e3d-9654-4e34-8739-e718f4884c75/aodh-notifier/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.246144 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-575ff4d8db-jrg4j_a65ae528-fb46-44a4-a3a3-543acfb646a9/barbican-api/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.313440 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-575ff4d8db-jrg4j_a65ae528-fb46-44a4-a3a3-543acfb646a9/barbican-api-log/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.475690 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-855fbd5c98-k2t4b_32db7ef2-6bb9-4834-9c9d-3bb13309b0e9/barbican-keystone-listener/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.656645 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-855fbd5c98-k2t4b_32db7ef2-6bb9-4834-9c9d-3bb13309b0e9/barbican-keystone-listener-log/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.710892 
4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd9c9bb87-rf75p_f944fb96-3cf4-42b3-b5b8-3da8dc107d7c/barbican-worker/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.851967 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5fd9c9bb87-rf75p_f944fb96-3cf4-42b3-b5b8-3da8dc107d7c/barbican-worker-log/0.log" Oct 02 20:07:41 crc kubenswrapper[4832]: I1002 20:07:41.993541 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-65bmd_92baed54-227c-474f-ad5c-b8c14493d2d5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:42 crc kubenswrapper[4832]: I1002 20:07:42.260937 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8a0ac381-9d1a-4068-b5bb-350b3979485e/ceilometer-notification-agent/0.log" Oct 02 20:07:42 crc kubenswrapper[4832]: I1002 20:07:42.308603 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8a0ac381-9d1a-4068-b5bb-350b3979485e/ceilometer-central-agent/0.log" Oct 02 20:07:42 crc kubenswrapper[4832]: I1002 20:07:42.408774 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8a0ac381-9d1a-4068-b5bb-350b3979485e/proxy-httpd/0.log" Oct 02 20:07:42 crc kubenswrapper[4832]: I1002 20:07:42.499971 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8a0ac381-9d1a-4068-b5bb-350b3979485e/sg-core/0.log" Oct 02 20:07:42 crc kubenswrapper[4832]: I1002 20:07:42.708644 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_528718cd-4242-48d1-be69-6637022d4c84/cinder-api/0.log" Oct 02 20:07:42 crc kubenswrapper[4832]: I1002 20:07:42.741492 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_528718cd-4242-48d1-be69-6637022d4c84/cinder-api-log/0.log" Oct 02 20:07:42 crc kubenswrapper[4832]: I1002 20:07:42.955931 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c154f010-097e-4cd5-8833-798bce95b715/cinder-scheduler/0.log" Oct 02 20:07:43 crc kubenswrapper[4832]: I1002 20:07:43.035130 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c154f010-097e-4cd5-8833-798bce95b715/probe/0.log" Oct 02 20:07:43 crc kubenswrapper[4832]: I1002 20:07:43.169439 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-px9xz_5610cb4e-4f23-4a76-b59c-5e3db6b532ff/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:43 crc kubenswrapper[4832]: I1002 20:07:43.409417 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fznkg_691f5920-3afd-4cf0-8ccb-61d2bbff10c2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:43 crc kubenswrapper[4832]: I1002 20:07:43.556078 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-xtftr_8b8c6e59-47c8-4051-a398-3f3d6739d15d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:43 crc kubenswrapper[4832]: I1002 20:07:43.694466 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-qvcnk_2e85de4e-7cb3-48e0-86f7-3faaf7e067d1/init/0.log" Oct 02 20:07:43 crc kubenswrapper[4832]: I1002 20:07:43.895014 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-qvcnk_2e85de4e-7cb3-48e0-86f7-3faaf7e067d1/init/0.log" Oct 02 20:07:43 crc kubenswrapper[4832]: I1002 20:07:43.927606 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-qvcnk_2e85de4e-7cb3-48e0-86f7-3faaf7e067d1/dnsmasq-dns/0.log" Oct 02 20:07:44 crc kubenswrapper[4832]: I1002 20:07:44.082300 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sp4pm_6f816324-d6b7-4ca1-bd57-f3e9e7e6a7d5/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:44 crc kubenswrapper[4832]: I1002 20:07:44.136311 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b7d0ad2c-59e0-4aee-930a-560d811c393c/glance-log/0.log" Oct 02 20:07:44 crc kubenswrapper[4832]: I1002 20:07:44.158595 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b7d0ad2c-59e0-4aee-930a-560d811c393c/glance-httpd/0.log" Oct 02 20:07:44 crc kubenswrapper[4832]: I1002 20:07:44.340487 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b3190ea6-2c6f-4fb9-a33a-768462224416/glance-httpd/0.log" Oct 02 20:07:44 crc kubenswrapper[4832]: I1002 20:07:44.376027 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b3190ea6-2c6f-4fb9-a33a-768462224416/glance-log/0.log" Oct 02 20:07:44 crc kubenswrapper[4832]: I1002 20:07:44.894678 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5779d8467c-rr8wn_fb6c24b8-fca2-49c2-8f1c-a41614962b83/heat-engine/0.log" Oct 02 20:07:45 crc kubenswrapper[4832]: I1002 20:07:45.136514 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qsv5t_61f0ae54-7250-4cc8-9b15-10d1be6c5d31/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:45 crc kubenswrapper[4832]: I1002 20:07:45.323220 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5b65db8df4-nckpl_be959889-fe35-4de3-b7b2-82df67812b7d/heat-api/0.log" Oct 02 20:07:45 crc kubenswrapper[4832]: I1002 20:07:45.379516 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-47m5t_a8da994a-7b15-400a-8316-27a8c28cafe1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:45 crc kubenswrapper[4832]: I1002 20:07:45.419554 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-698cc5cc6c-gmw7p_138ff508-ca7b-4291-8f0d-90ddc11770fb/heat-cfnapi/0.log" Oct 02 20:07:45 crc kubenswrapper[4832]: I1002 20:07:45.660396 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29323861-lrrgj_d89bc766-c21f-4c7e-a092-3e1db2ed4c9d/keystone-cron/0.log" Oct 02 20:07:45 crc kubenswrapper[4832]: I1002 20:07:45.903234 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29323921-bnpxq_7b64de45-584e-449a-9bfb-85d7b5ad5879/keystone-cron/0.log" Oct 02 20:07:45 crc kubenswrapper[4832]: I1002 20:07:45.993020 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ecd10228-8f4c-46ea-946d-838bc37b46cc/kube-state-metrics/0.log" Oct 02 20:07:46 crc kubenswrapper[4832]: I1002 20:07:46.067985 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-54ddcb9945-p7pkt_e632994f-7397-4c6f-950a-bcdff946d4e2/keystone-api/0.log" Oct 02 20:07:46 crc kubenswrapper[4832]: I1002 20:07:46.275082 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-r9c88_9e0f6923-879e-41f9-9c8b-f0cfede7221f/logging-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:46 crc kubenswrapper[4832]: I1002 20:07:46.283939 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-6pgg8_087d2e23-e74a-45de-baf2-2ed44a358880/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:46 crc kubenswrapper[4832]: I1002 20:07:46.520752 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_6888060d-2a19-41ee-ac4d-06a28c11a0f6/mysqld-exporter/0.log" Oct 02 20:07:47 crc kubenswrapper[4832]: I1002 20:07:47.025079 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68769b5c9-9g8wt_eba53986-08b2-4e79-b3d9-85367ff7d816/neutron-api/0.log" Oct 02 20:07:47 crc kubenswrapper[4832]: I1002 20:07:47.059423 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68769b5c9-9g8wt_eba53986-08b2-4e79-b3d9-85367ff7d816/neutron-httpd/0.log" Oct 02 20:07:47 crc kubenswrapper[4832]: I1002 20:07:47.247501 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-jj5cw_09555253-1acb-4af2-a44c-a2a5612465ff/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:48 crc kubenswrapper[4832]: I1002 20:07:48.082280 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_02307835-a3c7-4dc6-add1-8c9a6daab69d/nova-api-log/0.log" Oct 02 20:07:48 crc kubenswrapper[4832]: I1002 20:07:48.091347 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_46d668ae-13cf-4e3f-a2c4-8b862cdeafcb/nova-cell0-conductor-conductor/0.log" Oct 02 20:07:48 crc kubenswrapper[4832]: I1002 20:07:48.495081 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8657ca8f-f47b-476a-96f0-b5f5c313cb61/nova-cell1-conductor-conductor/0.log" Oct 02 20:07:48 crc kubenswrapper[4832]: I1002 20:07:48.793792 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_02307835-a3c7-4dc6-add1-8c9a6daab69d/nova-api-api/0.log" Oct 02 20:07:48 crc kubenswrapper[4832]: I1002 20:07:48.904150 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ce0ea362-776c-4b12-b3b6-9f684521d40f/nova-cell1-novncproxy-novncproxy/0.log" Oct 02 20:07:49 crc kubenswrapper[4832]: I1002 20:07:49.122368 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-l5mp2_26e8352e-0e5b-4ee9-83f5-aa3323948a6d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:49 crc kubenswrapper[4832]: I1002 20:07:49.242923 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ca63490c-e0ae-4fc3-89cc-f20f8810c98c/nova-metadata-log/0.log" Oct 02 20:07:49 crc kubenswrapper[4832]: I1002 20:07:49.801081 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8f93334a-ea76-42f6-9f67-0788fac06f14/nova-scheduler-scheduler/0.log" Oct 02 20:07:49 crc kubenswrapper[4832]: I1002 20:07:49.911634 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_3e9a3d78-f055-43d2-9d21-579d4a611d49/mysql-bootstrap/0.log" Oct 02 20:07:50 crc kubenswrapper[4832]: I1002 20:07:50.101752 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3e9a3d78-f055-43d2-9d21-579d4a611d49/galera/0.log" Oct 02 20:07:50 crc kubenswrapper[4832]: I1002 20:07:50.123050 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_3e9a3d78-f055-43d2-9d21-579d4a611d49/mysql-bootstrap/0.log" Oct 02 20:07:50 crc kubenswrapper[4832]: I1002 20:07:50.343642 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6c6d1dc-36df-4b33-8d10-dde52bd65630/mysql-bootstrap/0.log" Oct 02 20:07:50 crc kubenswrapper[4832]: I1002 20:07:50.613300 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6c6d1dc-36df-4b33-8d10-dde52bd65630/mysql-bootstrap/0.log" Oct 02 20:07:50 crc kubenswrapper[4832]: I1002 20:07:50.639695 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d6c6d1dc-36df-4b33-8d10-dde52bd65630/galera/0.log" Oct 02 20:07:50 crc kubenswrapper[4832]: I1002 20:07:50.846185 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ec4cba1f-e0b4-4901-add4-513dc675408e/openstackclient/0.log" Oct 02 20:07:51 crc kubenswrapper[4832]: I1002 20:07:51.173946 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6trqf_3533b085-2264-41c9-8feb-d8c6f40fa6c1/ovn-controller/0.log" Oct 02 20:07:51 crc kubenswrapper[4832]: I1002 20:07:51.442105 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-d5thb_cdf2a425-f35e-436a-ad17-c85f29e03490/openstack-network-exporter/0.log" Oct 02 20:07:51 crc kubenswrapper[4832]: I1002 20:07:51.661280 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g6w9z_37ac149f-65bb-4e89-911e-52f0c2434aad/ovsdb-server-init/0.log" Oct 02 20:07:51 crc kubenswrapper[4832]: I1002 20:07:51.889985 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g6w9z_37ac149f-65bb-4e89-911e-52f0c2434aad/ovsdb-server-init/0.log" Oct 02 20:07:51 crc kubenswrapper[4832]: I1002 20:07:51.934167 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g6w9z_37ac149f-65bb-4e89-911e-52f0c2434aad/ovs-vswitchd/0.log" Oct 02 20:07:51 crc kubenswrapper[4832]: I1002 20:07:51.935728 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ca63490c-e0ae-4fc3-89cc-f20f8810c98c/nova-metadata-metadata/0.log" Oct 02 20:07:52 crc kubenswrapper[4832]: I1002 20:07:52.102798 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-g6w9z_37ac149f-65bb-4e89-911e-52f0c2434aad/ovsdb-server/0.log" Oct 02 20:07:52 crc kubenswrapper[4832]: I1002 20:07:52.200406 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jwffh_6fd38150-7f6c-4ed2-ba9f-5f5c312ffd1d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:52 crc kubenswrapper[4832]: I1002 20:07:52.413547 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_85cf9359-d7f1-4634-9421-0dffdfb488e0/openstack-network-exporter/0.log" Oct 02 20:07:52 crc kubenswrapper[4832]: I1002 20:07:52.427771 4832 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-northd-0_85cf9359-d7f1-4634-9421-0dffdfb488e0/ovn-northd/0.log" Oct 02 20:07:52 crc kubenswrapper[4832]: I1002 20:07:52.660595 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ccf82d19-ed89-43fc-b2e0-5b8d871db17a/openstack-network-exporter/0.log" Oct 02 20:07:52 crc kubenswrapper[4832]: I1002 20:07:52.715699 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ccf82d19-ed89-43fc-b2e0-5b8d871db17a/ovsdbserver-nb/0.log" Oct 02 20:07:52 crc kubenswrapper[4832]: I1002 20:07:52.893362 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_04d55a7f-36c2-4f79-9541-3e0bf14963ca/openstack-network-exporter/0.log" Oct 02 20:07:52 crc kubenswrapper[4832]: I1002 20:07:52.953816 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_04d55a7f-36c2-4f79-9541-3e0bf14963ca/ovsdbserver-sb/0.log" Oct 02 20:07:53 crc kubenswrapper[4832]: I1002 20:07:53.212985 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7975695b86-g5x7n_9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f/placement-api/0.log" Oct 02 20:07:53 crc kubenswrapper[4832]: I1002 20:07:53.370482 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7975695b86-g5x7n_9b786a5b-55e1-4e5b-aa0d-fe00ae8f524f/placement-log/0.log" Oct 02 20:07:53 crc kubenswrapper[4832]: I1002 20:07:53.487216 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/init-config-reloader/0.log" Oct 02 20:07:53 crc kubenswrapper[4832]: I1002 20:07:53.679565 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/config-reloader/0.log" Oct 02 20:07:53 crc kubenswrapper[4832]: I1002 20:07:53.681419 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/prometheus/0.log" Oct 02 20:07:53 crc kubenswrapper[4832]: I1002 20:07:53.689640 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/init-config-reloader/0.log" Oct 02 20:07:53 crc kubenswrapper[4832]: I1002 20:07:53.853595 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_091b8e1f-4994-4bc6-8be4-c5a44668e088/thanos-sidecar/0.log" Oct 02 20:07:53 crc kubenswrapper[4832]: I1002 20:07:53.944144 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c87efd10-3959-4dfa-ab6a-88810fe9a0fa/setup-container/0.log" Oct 02 20:07:54 crc kubenswrapper[4832]: I1002 20:07:54.113716 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c87efd10-3959-4dfa-ab6a-88810fe9a0fa/setup-container/0.log" Oct 02 20:07:54 crc kubenswrapper[4832]: I1002 20:07:54.239174 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c87efd10-3959-4dfa-ab6a-88810fe9a0fa/rabbitmq/0.log" Oct 02 20:07:54 crc kubenswrapper[4832]: I1002 20:07:54.366356 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9ab42783-2e22-4b2f-9fab-be96ba65e345/setup-container/0.log" Oct 02 20:07:54 crc kubenswrapper[4832]: I1002 20:07:54.542946 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_9ab42783-2e22-4b2f-9fab-be96ba65e345/setup-container/0.log" Oct 02 20:07:54 crc kubenswrapper[4832]: I1002 20:07:54.569119 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9ab42783-2e22-4b2f-9fab-be96ba65e345/rabbitmq/0.log" Oct 02 20:07:54 crc kubenswrapper[4832]: I1002 20:07:54.766985 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rkjnh_c2a3b6ac-1648-4bfa-ab5f-d42c6e580c5d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:54 crc kubenswrapper[4832]: I1002 20:07:54.860564 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xzpxr_5cf50fba-3a89-451f-adfe-f64eb401d544/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:55 crc kubenswrapper[4832]: I1002 20:07:55.065768 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mfvch_4f9739db-9008-4848-bbc0-ddaa4da9c9b8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:55 crc kubenswrapper[4832]: I1002 20:07:55.228109 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-6jxlc_d5518272-a1ba-495e-8634-43ce4c08d705/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:55 crc kubenswrapper[4832]: I1002 20:07:55.296081 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-d4vj4_03a71b8f-0cad-40ab-8092-51c6e380b13d/ssh-known-hosts-edpm-deployment/0.log" Oct 02 20:07:55 crc kubenswrapper[4832]: I1002 20:07:55.594349 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5f6f67fd59-pbxsj_cffb41da-c1fe-465d-8ddc-9df65cc50a51/proxy-server/0.log" Oct 02 20:07:55 crc kubenswrapper[4832]: I1002 20:07:55.825540 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-7zhzt_3f58f07d-fb3b-4be8-a9b0-221aa5c01316/swift-ring-rebalance/0.log" Oct 02 20:07:55 crc kubenswrapper[4832]: I1002 20:07:55.849980 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5f6f67fd59-pbxsj_cffb41da-c1fe-465d-8ddc-9df65cc50a51/proxy-httpd/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.023994 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/account-auditor/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.089417 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/account-reaper/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.291852 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/account-replicator/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.333466 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/account-server/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.358598 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/container-auditor/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.520552 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/container-replicator/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.554965 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/container-server/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.570483 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/container-updater/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.713909 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-auditor/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.806260 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-expirer/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.809125 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-replicator/0.log" Oct 02 20:07:56 crc kubenswrapper[4832]: I1002 20:07:56.939071 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-server/0.log" Oct 02 20:07:57 crc kubenswrapper[4832]: I1002 20:07:57.025210 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/object-updater/0.log" Oct 02 20:07:57 crc kubenswrapper[4832]: I1002 20:07:57.026496 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/rsync/0.log" Oct 02 20:07:57 crc kubenswrapper[4832]: I1002 20:07:57.152630 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_df7b8400-95d5-481a-a9a1-d5b2586f159f/swift-recon-cron/0.log" Oct 02 20:07:57 crc kubenswrapper[4832]: I1002 20:07:57.315580 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-8wjw4_29281442-d5d6-4c9c-b24d-82c29d04990e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:57 crc kubenswrapper[4832]: I1002 20:07:57.561649 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-fmldg_0afc2c94-7e28-4344-b4be-807607a5c0e4/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:57 crc kubenswrapper[4832]: I1002 20:07:57.783205 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_29704afa-00e2-4921-92a4-9fe6f0d9e6e5/test-operator-logs-container/0.log" Oct 02 20:07:58 crc kubenswrapper[4832]: I1002 20:07:58.172644 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vvxbf_06a947a2-8fbe-4cf3-84d5-cf24e83a6e30/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 02 20:07:58 crc kubenswrapper[4832]: I1002 20:07:58.362197 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_040c96d0-9636-499a-9986-fb79a73e7b2d/tempest-tests-tempest-tests-runner/0.log" Oct 02 20:08:01 crc kubenswrapper[4832]: I1002 20:08:01.914659 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_84630b52-3d82-4ca3-aa26-0bf1b7ead64d/memcached/0.log" Oct 02 20:08:26 crc kubenswrapper[4832]: I1002 20:08:26.877768 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 20:08:26 crc kubenswrapper[4832]: I1002 20:08:26.881875 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 20:08:56 crc kubenswrapper[4832]: I1002 20:08:56.876027 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 20:08:56 crc kubenswrapper[4832]: I1002 20:08:56.876670 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 20:08:58 crc kubenswrapper[4832]: I1002 20:08:58.389381 4832 generic.go:334] "Generic (PLEG): container finished" podID="38fd1917-13b3-4164-bc21-4e00bba6fd5e" containerID="dfd921a7c5b408958e6c41e639578e05dcdf9dcee69e4c110e02bd0200504124" exitCode=0 Oct 02 20:08:58 crc kubenswrapper[4832]: I1002 20:08:58.390095 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/crc-debug-8wtr8" event={"ID":"38fd1917-13b3-4164-bc21-4e00bba6fd5e","Type":"ContainerDied","Data":"dfd921a7c5b408958e6c41e639578e05dcdf9dcee69e4c110e02bd0200504124"} Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.547981 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.594938 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-txx97/crc-debug-8wtr8"] Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.606188 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-txx97/crc-debug-8wtr8"] Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.678081 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2thn\" (UniqueName: \"kubernetes.io/projected/38fd1917-13b3-4164-bc21-4e00bba6fd5e-kube-api-access-n2thn\") pod \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\" (UID: \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\") " Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.678392 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38fd1917-13b3-4164-bc21-4e00bba6fd5e-host\") pod \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\" (UID: \"38fd1917-13b3-4164-bc21-4e00bba6fd5e\") " Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.679037 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38fd1917-13b3-4164-bc21-4e00bba6fd5e-host" (OuterVolumeSpecName: "host") pod "38fd1917-13b3-4164-bc21-4e00bba6fd5e" (UID: "38fd1917-13b3-4164-bc21-4e00bba6fd5e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.686879 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fd1917-13b3-4164-bc21-4e00bba6fd5e-kube-api-access-n2thn" (OuterVolumeSpecName: "kube-api-access-n2thn") pod "38fd1917-13b3-4164-bc21-4e00bba6fd5e" (UID: "38fd1917-13b3-4164-bc21-4e00bba6fd5e"). InnerVolumeSpecName "kube-api-access-n2thn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.781167 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2thn\" (UniqueName: \"kubernetes.io/projected/38fd1917-13b3-4164-bc21-4e00bba6fd5e-kube-api-access-n2thn\") on node \"crc\" DevicePath \"\"" Oct 02 20:08:59 crc kubenswrapper[4832]: I1002 20:08:59.781199 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38fd1917-13b3-4164-bc21-4e00bba6fd5e-host\") on node \"crc\" DevicePath \"\"" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.425931 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b64d2571b26503d96f498809af65809eb23023f9f3bf80cb22c1278c97b15163" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.426018 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-8wtr8" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.827804 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txx97/crc-debug-9j6ht"] Oct 02 20:09:00 crc kubenswrapper[4832]: E1002 20:09:00.830489 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fd1917-13b3-4164-bc21-4e00bba6fd5e" containerName="container-00" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.830530 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fd1917-13b3-4164-bc21-4e00bba6fd5e" containerName="container-00" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.831795 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fd1917-13b3-4164-bc21-4e00bba6fd5e" containerName="container-00" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.833507 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.837626 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-txx97"/"default-dockercfg-kxt6r" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.910428 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn5p6\" (UniqueName: \"kubernetes.io/projected/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-kube-api-access-gn5p6\") pod \"crc-debug-9j6ht\" (UID: \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\") " pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:00 crc kubenswrapper[4832]: I1002 20:09:00.910490 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-host\") pod \"crc-debug-9j6ht\" (UID: \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\") " pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:01 crc kubenswrapper[4832]: I1002 20:09:01.012689 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-host\") pod \"crc-debug-9j6ht\" (UID: \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\") " pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:01 crc kubenswrapper[4832]: I1002 20:09:01.012873 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-host\") pod \"crc-debug-9j6ht\" (UID: \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\") " pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:01 crc kubenswrapper[4832]: I1002 20:09:01.013955 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn5p6\" (UniqueName: \"kubernetes.io/projected/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-kube-api-access-gn5p6\") pod \"crc-debug-9j6ht\" (UID: \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\") " pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:01 crc kubenswrapper[4832]: I1002 20:09:01.050467 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn5p6\" (UniqueName: \"kubernetes.io/projected/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-kube-api-access-gn5p6\") pod \"crc-debug-9j6ht\" (UID: \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\") " pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:01 crc kubenswrapper[4832]: I1002 
20:09:01.160754 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:01 crc kubenswrapper[4832]: I1002 20:09:01.253642 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fd1917-13b3-4164-bc21-4e00bba6fd5e" path="/var/lib/kubelet/pods/38fd1917-13b3-4164-bc21-4e00bba6fd5e/volumes" Oct 02 20:09:01 crc kubenswrapper[4832]: I1002 20:09:01.435896 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/crc-debug-9j6ht" event={"ID":"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0","Type":"ContainerStarted","Data":"196a3637b4c752643221276494fe22f7a3e8f677104f80a359b83a3d26d57c67"} Oct 02 20:09:02 crc kubenswrapper[4832]: I1002 20:09:02.448781 4832 generic.go:334] "Generic (PLEG): container finished" podID="a2a55c92-a7b6-4b14-bee2-1ec65ee055f0" containerID="c403e70c2b0edba9473f8540fd49becfe4fb7f998b2beedfbddc7f69a2ba77d5" exitCode=0 Oct 02 20:09:02 crc kubenswrapper[4832]: I1002 20:09:02.448851 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/crc-debug-9j6ht" event={"ID":"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0","Type":"ContainerDied","Data":"c403e70c2b0edba9473f8540fd49becfe4fb7f998b2beedfbddc7f69a2ba77d5"} Oct 02 20:09:03 crc kubenswrapper[4832]: I1002 20:09:03.612989 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:03 crc kubenswrapper[4832]: I1002 20:09:03.706413 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-host\") pod \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\" (UID: \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\") " Oct 02 20:09:03 crc kubenswrapper[4832]: I1002 20:09:03.706574 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-host" (OuterVolumeSpecName: "host") pod "a2a55c92-a7b6-4b14-bee2-1ec65ee055f0" (UID: "a2a55c92-a7b6-4b14-bee2-1ec65ee055f0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 20:09:03 crc kubenswrapper[4832]: I1002 20:09:03.706638 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn5p6\" (UniqueName: \"kubernetes.io/projected/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-kube-api-access-gn5p6\") pod \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\" (UID: \"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0\") " Oct 02 20:09:03 crc kubenswrapper[4832]: I1002 20:09:03.707447 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-host\") on node \"crc\" DevicePath \"\"" Oct 02 20:09:03 crc kubenswrapper[4832]: I1002 20:09:03.713301 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-kube-api-access-gn5p6" (OuterVolumeSpecName: "kube-api-access-gn5p6") pod "a2a55c92-a7b6-4b14-bee2-1ec65ee055f0" (UID: "a2a55c92-a7b6-4b14-bee2-1ec65ee055f0"). InnerVolumeSpecName "kube-api-access-gn5p6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:09:03 crc kubenswrapper[4832]: I1002 20:09:03.811723 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn5p6\" (UniqueName: \"kubernetes.io/projected/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0-kube-api-access-gn5p6\") on node \"crc\" DevicePath \"\"" Oct 02 20:09:04 crc kubenswrapper[4832]: I1002 20:09:04.475791 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/crc-debug-9j6ht" event={"ID":"a2a55c92-a7b6-4b14-bee2-1ec65ee055f0","Type":"ContainerDied","Data":"196a3637b4c752643221276494fe22f7a3e8f677104f80a359b83a3d26d57c67"} Oct 02 20:09:04 crc kubenswrapper[4832]: I1002 20:09:04.476161 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="196a3637b4c752643221276494fe22f7a3e8f677104f80a359b83a3d26d57c67" Oct 02 20:09:04 crc kubenswrapper[4832]: I1002 20:09:04.475928 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-9j6ht" Oct 02 20:09:12 crc kubenswrapper[4832]: I1002 20:09:12.854364 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-txx97/crc-debug-9j6ht"] Oct 02 20:09:12 crc kubenswrapper[4832]: I1002 20:09:12.863473 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-txx97/crc-debug-9j6ht"] Oct 02 20:09:13 crc kubenswrapper[4832]: I1002 20:09:13.251995 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a55c92-a7b6-4b14-bee2-1ec65ee055f0" path="/var/lib/kubelet/pods/a2a55c92-a7b6-4b14-bee2-1ec65ee055f0/volumes" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.035416 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txx97/crc-debug-8p2cw"] Oct 02 20:09:14 crc kubenswrapper[4832]: E1002 20:09:14.036420 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a55c92-a7b6-4b14-bee2-1ec65ee055f0" containerName="container-00" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.036437 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a55c92-a7b6-4b14-bee2-1ec65ee055f0" containerName="container-00" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.036760 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a55c92-a7b6-4b14-bee2-1ec65ee055f0" containerName="container-00" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.039031 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.047000 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-txx97"/"default-dockercfg-kxt6r" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.070213 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmttl\" (UniqueName: \"kubernetes.io/projected/a7c18d34-96eb-4131-b058-559d3f243433-kube-api-access-hmttl\") pod \"crc-debug-8p2cw\" (UID: \"a7c18d34-96eb-4131-b058-559d3f243433\") " pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.070385 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7c18d34-96eb-4131-b058-559d3f243433-host\") pod \"crc-debug-8p2cw\" (UID: \"a7c18d34-96eb-4131-b058-559d3f243433\") " pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.173978 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7c18d34-96eb-4131-b058-559d3f243433-host\") pod \"crc-debug-8p2cw\" (UID: \"a7c18d34-96eb-4131-b058-559d3f243433\") " pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.174227 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmttl\" (UniqueName: \"kubernetes.io/projected/a7c18d34-96eb-4131-b058-559d3f243433-kube-api-access-hmttl\") pod \"crc-debug-8p2cw\" (UID: \"a7c18d34-96eb-4131-b058-559d3f243433\") " pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.174253 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7c18d34-96eb-4131-b058-559d3f243433-host\") pod \"crc-debug-8p2cw\" (UID: \"a7c18d34-96eb-4131-b058-559d3f243433\") " pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.193284 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmttl\" (UniqueName: \"kubernetes.io/projected/a7c18d34-96eb-4131-b058-559d3f243433-kube-api-access-hmttl\") pod \"crc-debug-8p2cw\" (UID: \"a7c18d34-96eb-4131-b058-559d3f243433\") " pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.362519 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:14 crc kubenswrapper[4832]: W1002 20:09:14.407964 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7c18d34_96eb_4131_b058_559d3f243433.slice/crio-56a81c99ae9f1e2c2925bcc78d25a796d6f76b8defedc809b434c477bcc20b47 WatchSource:0}: Error finding container 56a81c99ae9f1e2c2925bcc78d25a796d6f76b8defedc809b434c477bcc20b47: Status 404 returned error can't find the container with id 56a81c99ae9f1e2c2925bcc78d25a796d6f76b8defedc809b434c477bcc20b47 Oct 02 20:09:14 crc kubenswrapper[4832]: I1002 20:09:14.579208 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/crc-debug-8p2cw" event={"ID":"a7c18d34-96eb-4131-b058-559d3f243433","Type":"ContainerStarted","Data":"56a81c99ae9f1e2c2925bcc78d25a796d6f76b8defedc809b434c477bcc20b47"} Oct 02 20:09:15 crc kubenswrapper[4832]: I1002 20:09:15.592957 4832 generic.go:334] "Generic (PLEG): container finished" podID="a7c18d34-96eb-4131-b058-559d3f243433" containerID="af669627ff7cafbfb0f25697f73254d25cf576f3ff5eb83df6145d0b5912f841" exitCode=0 Oct 02 20:09:15 crc kubenswrapper[4832]: I1002 20:09:15.593024 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/crc-debug-8p2cw" event={"ID":"a7c18d34-96eb-4131-b058-559d3f243433","Type":"ContainerDied","Data":"af669627ff7cafbfb0f25697f73254d25cf576f3ff5eb83df6145d0b5912f841"} Oct 02 20:09:15 crc kubenswrapper[4832]: I1002 20:09:15.637763 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-txx97/crc-debug-8p2cw"] Oct 02 20:09:15 crc kubenswrapper[4832]: I1002 20:09:15.647664 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-txx97/crc-debug-8p2cw"] Oct 02 20:09:16 crc kubenswrapper[4832]: I1002 20:09:16.734543 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:16 crc kubenswrapper[4832]: I1002 20:09:16.839963 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7c18d34-96eb-4131-b058-559d3f243433-host\") pod \"a7c18d34-96eb-4131-b058-559d3f243433\" (UID: \"a7c18d34-96eb-4131-b058-559d3f243433\") " Oct 02 20:09:16 crc kubenswrapper[4832]: I1002 20:09:16.840375 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmttl\" (UniqueName: \"kubernetes.io/projected/a7c18d34-96eb-4131-b058-559d3f243433-kube-api-access-hmttl\") pod \"a7c18d34-96eb-4131-b058-559d3f243433\" (UID: \"a7c18d34-96eb-4131-b058-559d3f243433\") " Oct 02 20:09:16 crc kubenswrapper[4832]: I1002 20:09:16.840007 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7c18d34-96eb-4131-b058-559d3f243433-host" (OuterVolumeSpecName: "host") pod "a7c18d34-96eb-4131-b058-559d3f243433" (UID: "a7c18d34-96eb-4131-b058-559d3f243433"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 02 20:09:16 crc kubenswrapper[4832]: I1002 20:09:16.841190 4832 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a7c18d34-96eb-4131-b058-559d3f243433-host\") on node \"crc\" DevicePath \"\"" Oct 02 20:09:16 crc kubenswrapper[4832]: I1002 20:09:16.846722 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c18d34-96eb-4131-b058-559d3f243433-kube-api-access-hmttl" (OuterVolumeSpecName: "kube-api-access-hmttl") pod "a7c18d34-96eb-4131-b058-559d3f243433" (UID: "a7c18d34-96eb-4131-b058-559d3f243433"). InnerVolumeSpecName "kube-api-access-hmttl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:09:16 crc kubenswrapper[4832]: I1002 20:09:16.943995 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmttl\" (UniqueName: \"kubernetes.io/projected/a7c18d34-96eb-4131-b058-559d3f243433-kube-api-access-hmttl\") on node \"crc\" DevicePath \"\"" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.234681 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c18d34-96eb-4131-b058-559d3f243433" path="/var/lib/kubelet/pods/a7c18d34-96eb-4131-b058-559d3f243433/volumes" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.387711 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/util/0.log" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.577994 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/util/0.log" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.602521 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/pull/0.log" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.614110 4832 scope.go:117] "RemoveContainer" containerID="af669627ff7cafbfb0f25697f73254d25cf576f3ff5eb83df6145d0b5912f841" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.614160 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txx97/crc-debug-8p2cw" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.718114 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/pull/0.log" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.901491 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/util/0.log" Oct 02 20:09:17 crc kubenswrapper[4832]: I1002 20:09:17.937271 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/extract/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.014824 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_393b9ba3590b2be8fa56ff3698eeba17e90ce9494dba409a4ddd8437f5qkj77_cdf31730-cdc9-4eca-980c-2189c50917b2/pull/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.092787 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c675fb79f-szf5c_2cca84e4-3eb8-41c8-95db-f5b755e83758/kube-rbac-proxy/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.179651 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6c675fb79f-szf5c_2cca84e4-3eb8-41c8-95db-f5b755e83758/manager/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.220837 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79d68d6c85-98l5q_3a910552-db07-45ab-9f11-5b5051a1d070/kube-rbac-proxy/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.356585 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79d68d6c85-98l5q_3a910552-db07-45ab-9f11-5b5051a1d070/manager/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.410663 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-sqc96_d06661d7-5a41-4954-bfe4-8d25a9aa49d1/kube-rbac-proxy/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.480430 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-75dfd9b554-sqc96_d06661d7-5a41-4954-bfe4-8d25a9aa49d1/manager/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.621486 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-846dff85b5-9pnvm_0fa21051-6127-497b-a7dc-f4156314397e/kube-rbac-proxy/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.687503 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-846dff85b5-9pnvm_0fa21051-6127-497b-a7dc-f4156314397e/manager/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.780279 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-599898f689-rnsmf_7a36740f-eefd-4d9d-afe2-491d02a75fa6/kube-rbac-proxy/0.log" Oct 02 20:09:18 crc kubenswrapper[4832]: I1002 20:09:18.881078 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-599898f689-rnsmf_7a36740f-eefd-4d9d-afe2-491d02a75fa6/manager/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.025241 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6769b867d9-zsnms_b66dc2a1-b115-4952-99ce-866046ca9ea5/kube-rbac-proxy/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.025744 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6769b867d9-zsnms_b66dc2a1-b115-4952-99ce-866046ca9ea5/manager/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.179238 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5fbf469cd7-262c9_0835997a-eef2-4744-a6ed-dce8714f62f7/kube-rbac-proxy/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.358044 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5fbf469cd7-262c9_0835997a-eef2-4744-a6ed-dce8714f62f7/manager/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.431734 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-84bc9db6cc-xf6p9_93650652-02f0-403d-a9e6-6a71feb797c6/kube-rbac-proxy/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.462745 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-84bc9db6cc-xf6p9_93650652-02f0-403d-a9e6-6a71feb797c6/manager/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.614961 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f55849f88-fv5h5_b2e208d4-436d-4e17-b4b3-165b130164c7/kube-rbac-proxy/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.725030 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7f55849f88-fv5h5_b2e208d4-436d-4e17-b4b3-165b130164c7/manager/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.865807 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6fd6854b49-ts5sn_5ae05766-702f-4f1d-a149-a01663fd2b53/kube-rbac-proxy/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.919019 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6fd6854b49-ts5sn_5ae05766-702f-4f1d-a149-a01663fd2b53/manager/0.log" Oct 02 20:09:19 crc kubenswrapper[4832]: I1002 20:09:19.944564 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5c468bf4d4-nb6x7_6049f0ba-16e6-4773-bc16-d26b6e04364e/kube-rbac-proxy/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.079787 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5c468bf4d4-nb6x7_6049f0ba-16e6-4773-bc16-d26b6e04364e/manager/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.141739 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6574bf987d-79nms_36d178a5-f367-4534-ab1e-54c162ce2961/kube-rbac-proxy/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.262336 4832 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6574bf987d-79nms_36d178a5-f367-4534-ab1e-54c162ce2961/manager/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.541513 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-555c7456bd-v8gc6_60b0fee3-0856-4087-ad87-0a4847e3613c/kube-rbac-proxy/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.592608 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-555c7456bd-v8gc6_60b0fee3-0856-4087-ad87-0a4847e3613c/manager/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.680882 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-59d6cfdf45-tsswz_655a4d07-4b1f-420e-b676-8e5094960f64/kube-rbac-proxy/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.760333 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-59d6cfdf45-tsswz_655a4d07-4b1f-420e-b676-8e5094960f64/manager/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.837598 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr_30502d18-201c-4133-b25a-7b1e96ce21cf/kube-rbac-proxy/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.841953 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6f64c4d6789fgdr_30502d18-201c-4133-b25a-7b1e96ce21cf/manager/0.log" Oct 02 20:09:20 crc kubenswrapper[4832]: I1002 20:09:20.948373 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7bffff79d9-sj2rb_47968f19-fabf-423c-9cf6-1d8b57654e3f/kube-rbac-proxy/0.log" Oct 02 20:09:21 crc kubenswrapper[4832]: I1002 20:09:21.097806 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7c58d4ffff-bz6pp_37a13ba7-6567-4720-9a8d-ce1c3420bfb2/kube-rbac-proxy/0.log" Oct 02 20:09:21 crc kubenswrapper[4832]: I1002 20:09:21.295104 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7c58d4ffff-bz6pp_37a13ba7-6567-4720-9a8d-ce1c3420bfb2/operator/0.log" Oct 02 20:09:21 crc kubenswrapper[4832]: I1002 20:09:21.384679 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rqvl5_791e5e7f-81c9-4e84-baa4-d1f1f752ed7b/registry-server/0.log" Oct 02 20:09:21 crc kubenswrapper[4832]: I1002 20:09:21.541842 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-688db7b6c7-p9wql_8b694594-41bd-4e62-a202-951f85430ff6/kube-rbac-proxy/0.log" Oct 02 20:09:21 crc kubenswrapper[4832]: I1002 20:09:21.662548 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7d8bb7f44c-vqrd9_e8179b13-12b7-492d-bc86-f5543cfcbfbb/kube-rbac-proxy/0.log" Oct 02 20:09:21 crc kubenswrapper[4832]: I1002 20:09:21.668494 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-688db7b6c7-p9wql_8b694594-41bd-4e62-a202-951f85430ff6/manager/0.log" Oct 02 20:09:21 crc kubenswrapper[4832]: 
I1002 20:09:21.841545 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-7d8bb7f44c-vqrd9_e8179b13-12b7-492d-bc86-f5543cfcbfbb/manager/0.log" Oct 02 20:09:21 crc kubenswrapper[4832]: I1002 20:09:21.899611 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-59mvq_6ad88169-8b29-4078-90a5-759d2cb18325/operator/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.071633 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-97td6_2ac2d023-64bc-4653-a8eb-2dd5ed49313c/kube-rbac-proxy/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.128724 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6859f9b676-97td6_2ac2d023-64bc-4653-a8eb-2dd5ed49313c/manager/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.218890 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-769bf6645d-wj4tb_e6f36bc2-bb15-47f7-9881-05f35c2c513c/kube-rbac-proxy/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.414067 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-frks4_13334024-dee1-47bd-aebe-22df02b93ea0/manager/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.442585 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd5cb47d7-frks4_13334024-dee1-47bd-aebe-22df02b93ea0/kube-rbac-proxy/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.638900 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-fcd7d9895-plmxx_45bbf7cb-04fb-4076-af85-0cecd610a929/kube-rbac-proxy/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.640934 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7bffff79d9-sj2rb_47968f19-fabf-423c-9cf6-1d8b57654e3f/manager/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.742477 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-769bf6645d-wj4tb_e6f36bc2-bb15-47f7-9881-05f35c2c513c/manager/0.log" Oct 02 20:09:22 crc kubenswrapper[4832]: I1002 20:09:22.750233 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-fcd7d9895-plmxx_45bbf7cb-04fb-4076-af85-0cecd610a929/manager/0.log" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.541169 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w94f8"] Oct 02 20:09:26 crc kubenswrapper[4832]: E1002 20:09:26.542378 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c18d34-96eb-4131-b058-559d3f243433" containerName="container-00" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.542397 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c18d34-96eb-4131-b058-559d3f243433" containerName="container-00" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.542748 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c18d34-96eb-4131-b058-559d3f243433" containerName="container-00" Oct 02 20:09:26 crc 
kubenswrapper[4832]: I1002 20:09:26.544895 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.560822 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w94f8"] Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.667875 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-catalog-content\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.668162 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfkfm\" (UniqueName: \"kubernetes.io/projected/790fbb24-6114-41ac-ac66-2672fb00d091-kube-api-access-tfkfm\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.668432 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-utilities\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.770919 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-catalog-content\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.771023 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfkfm\" (UniqueName: \"kubernetes.io/projected/790fbb24-6114-41ac-ac66-2672fb00d091-kube-api-access-tfkfm\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.771132 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-utilities\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.771704 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-utilities\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.771770 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-catalog-content\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 
20:09:26.794157 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfkfm\" (UniqueName: \"kubernetes.io/projected/790fbb24-6114-41ac-ac66-2672fb00d091-kube-api-access-tfkfm\") pod \"redhat-operators-w94f8\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.867968 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.876701 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.876760 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.876807 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.881063 4832 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ef522b7b58d76f3efc9401215fda40f442def9ce3c07d5450ebdc9abcdb576e"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 20:09:26 crc kubenswrapper[4832]: I1002 20:09:26.881176 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://7ef522b7b58d76f3efc9401215fda40f442def9ce3c07d5450ebdc9abcdb576e" gracePeriod=600 Oct 02 20:09:27 crc kubenswrapper[4832]: I1002 20:09:27.766384 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="7ef522b7b58d76f3efc9401215fda40f442def9ce3c07d5450ebdc9abcdb576e" exitCode=0 Oct 02 20:09:27 crc kubenswrapper[4832]: I1002 20:09:27.766470 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"7ef522b7b58d76f3efc9401215fda40f442def9ce3c07d5450ebdc9abcdb576e"} Oct 02 20:09:27 crc kubenswrapper[4832]: I1002 20:09:27.766780 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerStarted","Data":"811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"} Oct 02 20:09:27 crc kubenswrapper[4832]: I1002 20:09:27.766801 4832 scope.go:117] "RemoveContainer" containerID="5106e94f7654b9e5f8ac8e10fb0f00a38b57ce0bf750100093ae412779b9cf32" Oct 02 20:09:28 crc kubenswrapper[4832]: W1002 20:09:28.103632 4832 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod790fbb24_6114_41ac_ac66_2672fb00d091.slice/crio-230d443f53e3f12e3ffe3d58cdb2a4f9a441ea4ed95eda28fa33c0430c87d913 WatchSource:0}: Error finding container 230d443f53e3f12e3ffe3d58cdb2a4f9a441ea4ed95eda28fa33c0430c87d913: Status 404 returned error can't find the container with id 230d443f53e3f12e3ffe3d58cdb2a4f9a441ea4ed95eda28fa33c0430c87d913 Oct 02 20:09:28 crc kubenswrapper[4832]: I1002 20:09:28.118134 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w94f8"] Oct 02 20:09:28 crc kubenswrapper[4832]: I1002 20:09:28.783794 4832 generic.go:334] "Generic (PLEG): container finished" podID="790fbb24-6114-41ac-ac66-2672fb00d091" containerID="e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206" exitCode=0 Oct 02 20:09:28 crc kubenswrapper[4832]: I1002 20:09:28.783938 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w94f8" event={"ID":"790fbb24-6114-41ac-ac66-2672fb00d091","Type":"ContainerDied","Data":"e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206"} Oct 02 20:09:28 crc kubenswrapper[4832]: I1002 20:09:28.784510 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w94f8" event={"ID":"790fbb24-6114-41ac-ac66-2672fb00d091","Type":"ContainerStarted","Data":"230d443f53e3f12e3ffe3d58cdb2a4f9a441ea4ed95eda28fa33c0430c87d913"} Oct 02 20:09:28 crc kubenswrapper[4832]: I1002 20:09:28.788591 4832 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 02 20:09:30 crc kubenswrapper[4832]: I1002 20:09:30.821217 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w94f8" event={"ID":"790fbb24-6114-41ac-ac66-2672fb00d091","Type":"ContainerStarted","Data":"2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865"} Oct 02 20:09:33 crc kubenswrapper[4832]: I1002 20:09:33.860229 4832 generic.go:334] "Generic (PLEG): container finished" podID="790fbb24-6114-41ac-ac66-2672fb00d091" containerID="2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865" exitCode=0 Oct 02 20:09:33 crc kubenswrapper[4832]: I1002 20:09:33.860397 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w94f8" event={"ID":"790fbb24-6114-41ac-ac66-2672fb00d091","Type":"ContainerDied","Data":"2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865"} Oct 02 20:09:34 crc kubenswrapper[4832]: I1002 20:09:34.885615 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w94f8" event={"ID":"790fbb24-6114-41ac-ac66-2672fb00d091","Type":"ContainerStarted","Data":"891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308"} Oct 02 20:09:34 crc kubenswrapper[4832]: I1002 20:09:34.925658 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w94f8" podStartSLOduration=3.435029481 podStartE2EDuration="8.925070882s" podCreationTimestamp="2025-10-02 20:09:26 +0000 UTC" firstStartedPulling="2025-10-02 20:09:28.786364215 +0000 UTC m=+6525.755807107" lastFinishedPulling="2025-10-02 20:09:34.276405636 +0000 UTC m=+6531.245848508" observedRunningTime="2025-10-02 20:09:34.910225651 +0000 UTC m=+6531.879668523" watchObservedRunningTime="2025-10-02 20:09:34.925070882 +0000 UTC m=+6531.894513754" Oct 02 20:09:36 crc kubenswrapper[4832]: I1002 20:09:36.868092 
4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:36 crc kubenswrapper[4832]: I1002 20:09:36.868880 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:09:37 crc kubenswrapper[4832]: I1002 20:09:37.932139 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w94f8" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="registry-server" probeResult="failure" output=< Oct 02 20:09:37 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 20:09:37 crc kubenswrapper[4832]: > Oct 02 20:09:40 crc kubenswrapper[4832]: I1002 20:09:40.698271 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nqc2q_5e0373b9-fb7e-4de3-adc4-c8a9c58a72ce/control-plane-machine-set-operator/0.log" Oct 02 20:09:40 crc kubenswrapper[4832]: I1002 20:09:40.874436 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8fkn8_34d934c3-20c9-4091-844a-e4db7482d8e0/machine-api-operator/0.log" Oct 02 20:09:40 crc kubenswrapper[4832]: I1002 20:09:40.884742 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8fkn8_34d934c3-20c9-4091-844a-e4db7482d8e0/kube-rbac-proxy/0.log" Oct 02 20:09:47 crc kubenswrapper[4832]: I1002 20:09:47.922394 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w94f8" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="registry-server" probeResult="failure" output=< Oct 02 20:09:47 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 20:09:47 crc kubenswrapper[4832]: > Oct 02 20:09:53 crc kubenswrapper[4832]: I1002 20:09:53.950685 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-mm5th_e3727292-356b-4969-bcb2-c57587cbf4a4/cert-manager-controller/0.log" Oct 02 20:09:54 crc kubenswrapper[4832]: I1002 20:09:54.071997 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-gmljf_4fa1e1e0-e670-4a90-9051-76f8448e9a9f/cert-manager-cainjector/0.log" Oct 02 20:09:54 crc kubenswrapper[4832]: I1002 20:09:54.120413 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-ml6fs_c98ef57c-5a9d-4947-a0e6-3658c7f54073/cert-manager-webhook/0.log" Oct 02 20:09:57 crc kubenswrapper[4832]: I1002 20:09:57.940863 4832 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w94f8" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="registry-server" probeResult="failure" output=< Oct 02 20:09:57 crc kubenswrapper[4832]: timeout: failed to connect service ":50051" within 1s Oct 02 20:09:57 crc kubenswrapper[4832]: > Oct 02 20:10:06 crc kubenswrapper[4832]: I1002 20:10:06.922926 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:10:06 crc kubenswrapper[4832]: I1002 20:10:06.987631 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:10:07 crc kubenswrapper[4832]: I1002 20:10:07.173002 4832 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-w94f8"] Oct 02 20:10:07 crc kubenswrapper[4832]: I1002 20:10:07.680881 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-mklc7_0c339642-1f25-4795-a62a-2db5045984cb/nmstate-console-plugin/0.log" Oct 02 20:10:07 crc kubenswrapper[4832]: I1002 20:10:07.836476 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-g62zg_477c57db-3df8-4587-abf7-ef94e8c4ad69/nmstate-handler/0.log" Oct 02 20:10:07 crc kubenswrapper[4832]: I1002 20:10:07.911071 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-vldj4_679a35a4-780b-431c-bb41-37763bf32d80/kube-rbac-proxy/0.log" Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.240959 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-vldj4_679a35a4-780b-431c-bb41-37763bf32d80/nmstate-metrics/0.log" Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.283565 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w94f8" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="registry-server" containerID="cri-o://891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308" gracePeriod=2 Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.310390 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-llnvs_c2e56bcb-1ff3-4c1f-8353-b84a573d23b2/nmstate-operator/0.log" Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.463673 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-wp5dm_d6480df8-e541-45f9-b397-d6abe2be00d3/nmstate-webhook/0.log" Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.885950 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.968202 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfkfm\" (UniqueName: \"kubernetes.io/projected/790fbb24-6114-41ac-ac66-2672fb00d091-kube-api-access-tfkfm\") pod \"790fbb24-6114-41ac-ac66-2672fb00d091\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.968349 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-catalog-content\") pod \"790fbb24-6114-41ac-ac66-2672fb00d091\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.968373 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-utilities\") pod \"790fbb24-6114-41ac-ac66-2672fb00d091\" (UID: \"790fbb24-6114-41ac-ac66-2672fb00d091\") " Oct 02 20:10:08 crc kubenswrapper[4832]: I1002 20:10:08.969399 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-utilities" (OuterVolumeSpecName: "utilities") pod "790fbb24-6114-41ac-ac66-2672fb00d091" (UID: "790fbb24-6114-41ac-ac66-2672fb00d091"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.005552 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790fbb24-6114-41ac-ac66-2672fb00d091-kube-api-access-tfkfm" (OuterVolumeSpecName: "kube-api-access-tfkfm") pod "790fbb24-6114-41ac-ac66-2672fb00d091" (UID: "790fbb24-6114-41ac-ac66-2672fb00d091"). InnerVolumeSpecName "kube-api-access-tfkfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.070597 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "790fbb24-6114-41ac-ac66-2672fb00d091" (UID: "790fbb24-6114-41ac-ac66-2672fb00d091"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.072003 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.072042 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/790fbb24-6114-41ac-ac66-2672fb00d091-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.072056 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfkfm\" (UniqueName: \"kubernetes.io/projected/790fbb24-6114-41ac-ac66-2672fb00d091-kube-api-access-tfkfm\") on node \"crc\" DevicePath \"\"" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.313621 4832 generic.go:334] "Generic (PLEG): container finished" podID="790fbb24-6114-41ac-ac66-2672fb00d091" containerID="891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308" exitCode=0 Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.313685 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w94f8" event={"ID":"790fbb24-6114-41ac-ac66-2672fb00d091","Type":"ContainerDied","Data":"891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308"} Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.313761 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w94f8" event={"ID":"790fbb24-6114-41ac-ac66-2672fb00d091","Type":"ContainerDied","Data":"230d443f53e3f12e3ffe3d58cdb2a4f9a441ea4ed95eda28fa33c0430c87d913"} Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.313787 4832 scope.go:117] "RemoveContainer" containerID="891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.314115 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w94f8" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.350854 4832 scope.go:117] "RemoveContainer" containerID="2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.358393 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w94f8"] Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.373671 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w94f8"] Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.376422 4832 scope.go:117] "RemoveContainer" containerID="e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.435657 4832 scope.go:117] "RemoveContainer" containerID="891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308" Oct 02 20:10:09 crc kubenswrapper[4832]: E1002 20:10:09.436323 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308\": container with ID starting with 891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308 not found: ID does not exist" containerID="891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.436450 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308"} err="failed to get container status \"891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308\": rpc error: code = NotFound desc = could not find container \"891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308\": container with ID starting with 891255acd6d24971770cf6f8eea59147cc488a05c5acc82cae8221521849e308 not found: ID does not exist" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.436611 4832 scope.go:117] "RemoveContainer" containerID="2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865" Oct 02 20:10:09 crc kubenswrapper[4832]: E1002 20:10:09.436978 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865\": container with ID starting with 2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865 not found: ID does not exist" containerID="2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.437025 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865"} err="failed to get container status \"2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865\": rpc error: code = NotFound desc = could not find container \"2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865\": container with ID starting with 2f20f4ff7800a0530631170b4b1804500b5a674ea36054a0a904f8d73c1e9865 not found: ID does not exist" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.437054 4832 scope.go:117] "RemoveContainer" containerID="e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206" Oct 02 20:10:09 crc kubenswrapper[4832]: E1002 20:10:09.437376 4832 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206\": container with ID starting with e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206 not found: ID does not exist" containerID="e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206" Oct 02 20:10:09 crc kubenswrapper[4832]: I1002 20:10:09.437404 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206"} err="failed to get container status \"e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206\": rpc error: code = NotFound desc = could not find container \"e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206\": container with ID starting with e5297d395fdd00b22fb96ef1ab44944d4aad510abe1490887a17dcb9e532d206 not found: ID does not exist" Oct 02 20:10:11 crc kubenswrapper[4832]: I1002 20:10:11.239371 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" path="/var/lib/kubelet/pods/790fbb24-6114-41ac-ac66-2672fb00d091/volumes" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.110891 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d28nt"] Oct 02 20:10:16 crc kubenswrapper[4832]: E1002 20:10:16.114203 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="registry-server" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.114238 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="registry-server" Oct 02 20:10:16 crc kubenswrapper[4832]: E1002 20:10:16.114326 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="extract-utilities" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.114336 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="extract-utilities" Oct 02 20:10:16 crc kubenswrapper[4832]: E1002 20:10:16.114366 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="extract-content" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.114373 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="extract-content" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.114774 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="790fbb24-6114-41ac-ac66-2672fb00d091" containerName="registry-server" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.118281 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.125816 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d28nt"] Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.277030 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62fw9\" (UniqueName: \"kubernetes.io/projected/92c34108-31f1-4c07-b284-33ddb91a2c9b-kube-api-access-62fw9\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.277088 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-utilities\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.277603 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-catalog-content\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.380478 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62fw9\" (UniqueName: \"kubernetes.io/projected/92c34108-31f1-4c07-b284-33ddb91a2c9b-kube-api-access-62fw9\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.380563 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-utilities\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.380901 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-catalog-content\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.382579 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-utilities\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.383139 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-catalog-content\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.402516 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-62fw9\" (UniqueName: \"kubernetes.io/projected/92c34108-31f1-4c07-b284-33ddb91a2c9b-kube-api-access-62fw9\") pod \"certified-operators-d28nt\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.451799 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:16 crc kubenswrapper[4832]: I1002 20:10:16.936036 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d28nt"] Oct 02 20:10:17 crc kubenswrapper[4832]: I1002 20:10:17.400078 4832 generic.go:334] "Generic (PLEG): container finished" podID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerID="a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c" exitCode=0 Oct 02 20:10:17 crc kubenswrapper[4832]: I1002 20:10:17.400122 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d28nt" event={"ID":"92c34108-31f1-4c07-b284-33ddb91a2c9b","Type":"ContainerDied","Data":"a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c"} Oct 02 20:10:17 crc kubenswrapper[4832]: I1002 20:10:17.400147 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d28nt" event={"ID":"92c34108-31f1-4c07-b284-33ddb91a2c9b","Type":"ContainerStarted","Data":"9a3e382b19fd497ab0f87163c6aca62916bb56eebd272be3244b9755c221f9fb"} Oct 02 20:10:19 crc kubenswrapper[4832]: I1002 20:10:19.423646 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d28nt" event={"ID":"92c34108-31f1-4c07-b284-33ddb91a2c9b","Type":"ContainerStarted","Data":"feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6"} Oct 02 20:10:20 crc kubenswrapper[4832]: I1002 20:10:20.436756 4832 generic.go:334] "Generic (PLEG): container finished" podID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerID="feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6" exitCode=0 Oct 02 20:10:20 crc kubenswrapper[4832]: I1002 20:10:20.436852 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d28nt" event={"ID":"92c34108-31f1-4c07-b284-33ddb91a2c9b","Type":"ContainerDied","Data":"feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6"} Oct 02 20:10:21 crc kubenswrapper[4832]: I1002 20:10:21.091204 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69857bc6ff-kl7qp_3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a/manager/0.log" Oct 02 20:10:21 crc kubenswrapper[4832]: I1002 20:10:21.110354 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69857bc6ff-kl7qp_3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a/kube-rbac-proxy/0.log" Oct 02 20:10:21 crc kubenswrapper[4832]: I1002 20:10:21.450293 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d28nt" event={"ID":"92c34108-31f1-4c07-b284-33ddb91a2c9b","Type":"ContainerStarted","Data":"28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87"} Oct 02 20:10:21 crc kubenswrapper[4832]: I1002 20:10:21.475968 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d28nt" podStartSLOduration=1.934691016 
podStartE2EDuration="5.475945681s" podCreationTimestamp="2025-10-02 20:10:16 +0000 UTC" firstStartedPulling="2025-10-02 20:10:17.402460403 +0000 UTC m=+6574.371903275" lastFinishedPulling="2025-10-02 20:10:20.943715068 +0000 UTC m=+6577.913157940" observedRunningTime="2025-10-02 20:10:21.467318984 +0000 UTC m=+6578.436761876" watchObservedRunningTime="2025-10-02 20:10:21.475945681 +0000 UTC m=+6578.445388553" Oct 02 20:10:26 crc kubenswrapper[4832]: I1002 20:10:26.452190 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:26 crc kubenswrapper[4832]: I1002 20:10:26.452784 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:26 crc kubenswrapper[4832]: I1002 20:10:26.513489 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:26 crc kubenswrapper[4832]: I1002 20:10:26.568727 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:26 crc kubenswrapper[4832]: I1002 20:10:26.754148 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d28nt"] Oct 02 20:10:28 crc kubenswrapper[4832]: I1002 20:10:28.519111 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d28nt" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerName="registry-server" containerID="cri-o://28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87" gracePeriod=2 Oct 02 20:10:28 crc kubenswrapper[4832]: E1002 20:10:28.662296 4832 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92c34108_31f1_4c07_b284_33ddb91a2c9b.slice/crio-conmon-28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92c34108_31f1_4c07_b284_33ddb91a2c9b.slice/crio-28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87.scope\": RecentStats: unable to find data in memory cache]" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.103215 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.246047 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62fw9\" (UniqueName: \"kubernetes.io/projected/92c34108-31f1-4c07-b284-33ddb91a2c9b-kube-api-access-62fw9\") pod \"92c34108-31f1-4c07-b284-33ddb91a2c9b\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.246120 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-catalog-content\") pod \"92c34108-31f1-4c07-b284-33ddb91a2c9b\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.246143 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-utilities\") pod \"92c34108-31f1-4c07-b284-33ddb91a2c9b\" (UID: \"92c34108-31f1-4c07-b284-33ddb91a2c9b\") " Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.247538 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-utilities" (OuterVolumeSpecName: "utilities") pod "92c34108-31f1-4c07-b284-33ddb91a2c9b" (UID: "92c34108-31f1-4c07-b284-33ddb91a2c9b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.252691 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c34108-31f1-4c07-b284-33ddb91a2c9b-kube-api-access-62fw9" (OuterVolumeSpecName: "kube-api-access-62fw9") pod "92c34108-31f1-4c07-b284-33ddb91a2c9b" (UID: "92c34108-31f1-4c07-b284-33ddb91a2c9b"). InnerVolumeSpecName "kube-api-access-62fw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.294963 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92c34108-31f1-4c07-b284-33ddb91a2c9b" (UID: "92c34108-31f1-4c07-b284-33ddb91a2c9b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.349397 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62fw9\" (UniqueName: \"kubernetes.io/projected/92c34108-31f1-4c07-b284-33ddb91a2c9b-kube-api-access-62fw9\") on node \"crc\" DevicePath \"\"" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.349434 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.349447 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c34108-31f1-4c07-b284-33ddb91a2c9b-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.531754 4832 generic.go:334] "Generic (PLEG): container finished" podID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerID="28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87" exitCode=0 Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.531829 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d28nt" event={"ID":"92c34108-31f1-4c07-b284-33ddb91a2c9b","Type":"ContainerDied","Data":"28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87"} Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.531863 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d28nt" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.532082 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d28nt" event={"ID":"92c34108-31f1-4c07-b284-33ddb91a2c9b","Type":"ContainerDied","Data":"9a3e382b19fd497ab0f87163c6aca62916bb56eebd272be3244b9755c221f9fb"} Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.532107 4832 scope.go:117] "RemoveContainer" containerID="28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.562765 4832 scope.go:117] "RemoveContainer" containerID="feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.566750 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d28nt"] Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.578729 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d28nt"] Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.593363 4832 scope.go:117] "RemoveContainer" containerID="a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.648881 4832 scope.go:117] "RemoveContainer" containerID="28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87" Oct 02 20:10:29 crc kubenswrapper[4832]: E1002 20:10:29.649241 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87\": container with ID starting with 28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87 not found: ID does not exist" containerID="28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.649296 
4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87"} err="failed to get container status \"28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87\": rpc error: code = NotFound desc = could not find container \"28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87\": container with ID starting with 28877ee6a830daf17e567fae94a12ce1ee9a4ce78cae0dfc4de3e6ff9af3de87 not found: ID does not exist" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.649323 4832 scope.go:117] "RemoveContainer" containerID="feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6" Oct 02 20:10:29 crc kubenswrapper[4832]: E1002 20:10:29.649909 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6\": container with ID starting with feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6 not found: ID does not exist" containerID="feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.649942 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6"} err="failed to get container status \"feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6\": rpc error: code = NotFound desc = could not find container \"feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6\": container with ID starting with feac9733d3932f98ed109a6687a8d0122b43c3a9f5f44bb16b06bb13bc8d0ae6 not found: ID does not exist" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.649961 4832 scope.go:117] "RemoveContainer" containerID="a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c" Oct 02 20:10:29 crc kubenswrapper[4832]: E1002 20:10:29.653061 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c\": container with ID starting with a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c not found: ID does not exist" containerID="a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c" Oct 02 20:10:29 crc kubenswrapper[4832]: I1002 20:10:29.653104 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c"} err="failed to get container status \"a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c\": rpc error: code = NotFound desc = could not find container \"a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c\": container with ID starting with a121c1b94773fe6add20b671eaa7de178ed1d442f715d559db0b153b6947169c not found: ID does not exist" Oct 02 20:10:31 crc kubenswrapper[4832]: I1002 20:10:31.236953 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" path="/var/lib/kubelet/pods/92c34108-31f1-4c07-b284-33ddb91a2c9b/volumes" Oct 02 20:10:35 crc kubenswrapper[4832]: I1002 20:10:35.188491 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-8958c8b87-4cq9h_901f1678-8c0e-437e-bbb0-ba98d72c5aed/cluster-logging-operator/0.log" Oct 02 20:10:35 crc kubenswrapper[4832]: 
I1002 20:10:35.428924 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-jtbs5_07997706-bdc1-4e87-a7cb-9f5e4b85ea9c/collector/0.log" Oct 02 20:10:35 crc kubenswrapper[4832]: I1002 20:10:35.439191 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_a6a49601-8595-459e-b680-391e7b597054/loki-compactor/0.log" Oct 02 20:10:35 crc kubenswrapper[4832]: I1002 20:10:35.730512 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-6f5f7fff97-x5gwf_0ac07716-7573-4264-9530-b6dd1ea4ce14/loki-distributor/0.log" Oct 02 20:10:35 crc kubenswrapper[4832]: I1002 20:10:35.781146 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6f7dfcd5dd-w48kx_aa1f7d1a-2f01-4d70-b78c-0b28692ce57c/gateway/0.log" Oct 02 20:10:35 crc kubenswrapper[4832]: I1002 20:10:35.805058 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6f7dfcd5dd-w48kx_aa1f7d1a-2f01-4d70-b78c-0b28692ce57c/opa/0.log" Oct 02 20:10:35 crc kubenswrapper[4832]: I1002 20:10:35.977301 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6f7dfcd5dd-w4hsc_242ef9e3-c339-468a-b6a7-298dfab16a59/gateway/0.log" Oct 02 20:10:36 crc kubenswrapper[4832]: I1002 20:10:36.024531 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-6f7dfcd5dd-w4hsc_242ef9e3-c339-468a-b6a7-298dfab16a59/opa/0.log" Oct 02 20:10:36 crc kubenswrapper[4832]: I1002 20:10:36.143251 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_9c643836-b5c3-48dc-8b08-f8a5bcbea2c7/loki-index-gateway/0.log" Oct 02 20:10:36 crc kubenswrapper[4832]: I1002 20:10:36.351462 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_757a6407-20f2-4b69-816a-6b01c7e5cc79/loki-ingester/0.log" Oct 02 20:10:36 crc kubenswrapper[4832]: I1002 20:10:36.401754 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5d954896cf-g55np_64a6d330-84e2-4071-9345-a5dd8496940a/loki-querier/0.log" Oct 02 20:10:36 crc kubenswrapper[4832]: I1002 20:10:36.534245 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6fbbbc8b7d-kqxp2_45f6d92a-3f93-4a09-8ed0-74ad13440476/loki-query-frontend/0.log" Oct 02 20:10:50 crc kubenswrapper[4832]: I1002 20:10:50.879143 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-xl8tj_80714958-8954-4014-97af-c480df6a6981/kube-rbac-proxy/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.156982 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-frr-files/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.163578 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-xl8tj_80714958-8954-4014-97af-c480df6a6981/controller/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.420477 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-metrics/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.430120 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-reloader/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.434498 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-frr-files/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.440543 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-reloader/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.687581 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-frr-files/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.715694 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-metrics/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.722070 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-reloader/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.723868 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-metrics/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.895070 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-reloader/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.899856 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/controller/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.909847 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-metrics/0.log" Oct 02 20:10:51 crc kubenswrapper[4832]: I1002 20:10:51.919772 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/cp-frr-files/0.log" Oct 02 20:10:52 crc kubenswrapper[4832]: I1002 20:10:52.125880 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/frr-metrics/0.log" Oct 02 20:10:52 crc kubenswrapper[4832]: I1002 20:10:52.151189 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/kube-rbac-proxy/0.log" Oct 02 20:10:52 crc kubenswrapper[4832]: I1002 20:10:52.200575 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/kube-rbac-proxy-frr/0.log" Oct 02 20:10:52 crc kubenswrapper[4832]: I1002 20:10:52.414614 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/reloader/0.log" Oct 02 20:10:52 crc kubenswrapper[4832]: I1002 20:10:52.503214 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-6zhbp_ce300e6d-f2b9-47e7-a85e-6a9543a69711/frr-k8s-webhook-server/0.log" Oct 02 20:10:52 crc kubenswrapper[4832]: I1002 20:10:52.940083 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d88c76f5f-jpcb4_964a5285-636b-4f3a-ab7d-226ff204c8f2/manager/0.log" Oct 02 20:10:53 crc kubenswrapper[4832]: I1002 20:10:53.097298 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7878588579-8s24k_94945b05-e6ed-4bb5-8dde-592b66304f50/webhook-server/0.log" Oct 02 20:10:53 crc kubenswrapper[4832]: I1002 20:10:53.418130 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-99pw7_c297584e-08f6-47c0-8acd-35bd207a9394/kube-rbac-proxy/0.log" Oct 02 20:10:53 crc kubenswrapper[4832]: I1002 20:10:53.932299 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-99pw7_c297584e-08f6-47c0-8acd-35bd207a9394/speaker/0.log" Oct 02 20:10:53 crc kubenswrapper[4832]: I1002 20:10:53.977014 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dsxph_d460a40d-97e5-460d-aaf6-927bc8707843/frr/0.log" Oct 02 20:11:07 crc kubenswrapper[4832]: I1002 20:11:07.695964 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/util/0.log" Oct 02 20:11:07 crc kubenswrapper[4832]: I1002 20:11:07.952767 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/pull/0.log" Oct 02 20:11:07 crc kubenswrapper[4832]: I1002 20:11:07.989121 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/pull/0.log" Oct 02 20:11:08 crc kubenswrapper[4832]: I1002 20:11:08.008554 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/util/0.log" Oct 02 20:11:08 crc kubenswrapper[4832]: I1002 20:11:08.171748 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/extract/0.log" Oct 02 20:11:08 crc kubenswrapper[4832]: I1002 20:11:08.182221 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/util/0.log" Oct 02 20:11:08 crc kubenswrapper[4832]: I1002 20:11:08.189220 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0999b653a84702c2c11c13a5389e053aa7f0501a9a88eec9014235a37d66pdx_5d0750e6-dd9d-4a96-97ec-97f9857702d9/pull/0.log" Oct 02 20:11:08 crc kubenswrapper[4832]: I1002 20:11:08.416838 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/util/0.log" Oct 02 20:11:08 crc kubenswrapper[4832]: I1002 20:11:08.533399 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/util/0.log" Oct 02 20:11:08 crc kubenswrapper[4832]: I1002 20:11:08.580098 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/pull/0.log" Oct 02 20:11:08 crc kubenswrapper[4832]: I1002 20:11:08.598487 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/pull/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.028528 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/extract/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.066181 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/pull/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.085819 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2vgkk9_1d90d357-6ff7-497d-a6c5-2dbd6af40493/util/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.238511 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/util/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.487964 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/pull/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.493590 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/pull/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.574718 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/util/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.666841 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/util/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.703775 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/extract/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.708592 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8qlrf_7c5c5779-5cc4-48b6-92ad-5c2e2248804d/pull/0.log" Oct 02 20:11:09 crc kubenswrapper[4832]: I1002 20:11:09.924630 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/util/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.077140 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/util/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.099421 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/pull/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.134527 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/pull/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.344672 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/extract/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.384186 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/util/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.493626 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_b750ce2fcb78a523ce3c4c91d54ad8430abb37e936593acebfbbbfa601rcth2_dbdaf694-0aaf-4fd9-9e6d-01a3d8581364/pull/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.676864 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-utilities/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.846144 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-content/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.868885 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-content/0.log" Oct 02 20:11:10 crc kubenswrapper[4832]: I1002 20:11:10.904070 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-utilities/0.log" Oct 02 20:11:11 crc kubenswrapper[4832]: I1002 20:11:11.076621 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-utilities/0.log" Oct 02 20:11:11 crc kubenswrapper[4832]: I1002 20:11:11.132727 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/extract-content/0.log" Oct 02 20:11:11 crc kubenswrapper[4832]: I1002 20:11:11.348326 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-utilities/0.log" Oct 02 20:11:11 crc kubenswrapper[4832]: I1002 20:11:11.524017 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-utilities/0.log" Oct 02 20:11:11 crc kubenswrapper[4832]: I1002 20:11:11.550970 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-content/0.log" Oct 02 20:11:11 crc kubenswrapper[4832]: I1002 20:11:11.557364 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-content/0.log" Oct 02 20:11:11 crc kubenswrapper[4832]: I1002 20:11:11.759142 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-utilities/0.log" Oct 02 20:11:11 crc kubenswrapper[4832]: I1002 20:11:11.827626 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/extract-content/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.011425 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/util/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.232816 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/pull/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.267804 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/pull/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.285176 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/util/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.429421 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hqqbj_e116f154-c5cb-480d-b397-1cd848496e21/registry-server/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.478061 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-whm2b_0467cc9b-9752-4c8c-bde0-660d88dabfb9/registry-server/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.763492 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/extract/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.791386 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/util/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.806895 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cpl7vh_57268deb-95d5-4987-ab26-52f11e9182b4/pull/0.log" Oct 02 20:11:12 crc kubenswrapper[4832]: I1002 20:11:12.853594 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vpnm2_4deba2ec-10ea-48dd-b732-a924f01ab1b7/marketplace-operator/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.041113 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-utilities/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.273212 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-content/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.335171 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-content/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.346738 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-utilities/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.556721 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-content/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.625191 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/extract-utilities/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.659349 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-utilities/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.794613 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5jp2v_0d465b37-f6e4-48a2-bda2-efc7d3601131/registry-server/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.925117 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-utilities/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.951633 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-content/0.log" Oct 02 20:11:13 crc kubenswrapper[4832]: I1002 20:11:13.971573 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-content/0.log" Oct 02 20:11:14 crc kubenswrapper[4832]: I1002 20:11:14.139873 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-utilities/0.log" Oct 02 20:11:14 crc kubenswrapper[4832]: I1002 20:11:14.175412 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/extract-content/0.log" Oct 02 20:11:15 crc kubenswrapper[4832]: I1002 20:11:15.015515 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-p6rnh_4bbe1430-3664-4a03-97a8-5302998288ca/registry-server/0.log" Oct 02 20:11:26 crc kubenswrapper[4832]: I1002 20:11:26.740248 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-7c8cf85677-b4kkp_38be72b3-2875-4e11-895a-d7b229709e75/prometheus-operator/0.log" Oct 02 20:11:26 crc kubenswrapper[4832]: I1002 20:11:26.901330 4832 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c5488c7d6-95wfn_cf1d6e7f-c76f-4888-8465-3651cdd3c079/prometheus-operator-admission-webhook/0.log" Oct 02 20:11:26 crc kubenswrapper[4832]: I1002 20:11:26.938224 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7c5488c7d6-fp8hw_4c3e8b2c-bc7f-456f-b39b-a08d5333ce0c/prometheus-operator-admission-webhook/0.log" Oct 02 20:11:27 crc kubenswrapper[4832]: I1002 20:11:27.156347 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-cc5f78dfc-sgrwh_9e59cc84-d625-4121-956d-773c5be0d917/operator/0.log" Oct 02 20:11:27 crc kubenswrapper[4832]: I1002 20:11:27.168616 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-6584dc9448-6ftdp_28fbc8db-b613-4de9-a177-3f7c5be4d857/observability-ui-dashboards/0.log" Oct 02 20:11:27 crc kubenswrapper[4832]: I1002 20:11:27.323072 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-54bc95c9fb-wczsw_72b2ae10-6b68-4738-9474-41a9fa1f9f92/perses-operator/0.log" Oct 02 20:11:39 crc kubenswrapper[4832]: I1002 20:11:39.276768 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69857bc6ff-kl7qp_3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a/kube-rbac-proxy/0.log" Oct 02 20:11:39 crc kubenswrapper[4832]: I1002 20:11:39.342907 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-69857bc6ff-kl7qp_3bf08a9f-8e06-4a41-b5f6-2b15b89b6e3a/manager/0.log" Oct 02 20:11:56 crc kubenswrapper[4832]: I1002 20:11:56.875545 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 20:11:56 crc kubenswrapper[4832]: I1002 20:11:56.876178 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 20:12:02 crc kubenswrapper[4832]: E1002 20:12:02.941429 4832 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.180:38294->38.102.83.180:36377: write tcp 38.102.83.180:38294->38.102.83.180:36377: write: broken pipe Oct 02 20:12:25 crc kubenswrapper[4832]: I1002 20:12:25.113415 4832 scope.go:117] "RemoveContainer" containerID="dfd921a7c5b408958e6c41e639578e05dcdf9dcee69e4c110e02bd0200504124" Oct 02 20:12:26 crc kubenswrapper[4832]: I1002 20:12:26.875183 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 20:12:26 crc kubenswrapper[4832]: I1002 20:12:26.875674 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 20:12:50 crc kubenswrapper[4832]: I1002 20:12:50.936757 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9vwpr"] Oct 02 20:12:50 crc kubenswrapper[4832]: E1002 20:12:50.938055 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerName="extract-content" Oct 02 20:12:50 crc kubenswrapper[4832]: I1002 20:12:50.938072 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerName="extract-content" Oct 02 20:12:50 crc kubenswrapper[4832]: E1002 20:12:50.938102 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerName="extract-utilities" Oct 02 20:12:50 crc kubenswrapper[4832]: I1002 20:12:50.938112 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerName="extract-utilities" Oct 02 20:12:50 crc kubenswrapper[4832]: E1002 20:12:50.938149 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerName="registry-server" Oct 02 20:12:50 crc kubenswrapper[4832]: I1002 20:12:50.938157 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerName="registry-server" Oct 02 20:12:50 crc kubenswrapper[4832]: I1002 20:12:50.938445 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c34108-31f1-4c07-b284-33ddb91a2c9b" containerName="registry-server" Oct 02 20:12:50 crc kubenswrapper[4832]: I1002 20:12:50.940851 4832 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.007543 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-catalog-content\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.007914 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq4vn\" (UniqueName: \"kubernetes.io/projected/aeab489e-1205-4334-a5cc-dd5d8a5cec51-kube-api-access-qq4vn\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.008099 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-utilities\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.038327 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9vwpr"] Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.110460 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-utilities\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.110843 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-catalog-content\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.110992 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq4vn\" (UniqueName: \"kubernetes.io/projected/aeab489e-1205-4334-a5cc-dd5d8a5cec51-kube-api-access-qq4vn\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.111054 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-utilities\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.111287 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-catalog-content\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.138116 4832 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qq4vn\" (UniqueName: \"kubernetes.io/projected/aeab489e-1205-4334-a5cc-dd5d8a5cec51-kube-api-access-qq4vn\") pod \"community-operators-9vwpr\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:51 crc kubenswrapper[4832]: I1002 20:12:51.266298 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:12:52 crc kubenswrapper[4832]: I1002 20:12:52.009004 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9vwpr"] Oct 02 20:12:52 crc kubenswrapper[4832]: I1002 20:12:52.199808 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vwpr" event={"ID":"aeab489e-1205-4334-a5cc-dd5d8a5cec51","Type":"ContainerStarted","Data":"5e0fe3e0c05bb2f9f3fa7ab4e740b710fd8876f39419b5904d1a0d53944f0a4c"} Oct 02 20:12:53 crc kubenswrapper[4832]: I1002 20:12:53.211836 4832 generic.go:334] "Generic (PLEG): container finished" podID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerID="32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8" exitCode=0 Oct 02 20:12:53 crc kubenswrapper[4832]: I1002 20:12:53.211978 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vwpr" event={"ID":"aeab489e-1205-4334-a5cc-dd5d8a5cec51","Type":"ContainerDied","Data":"32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8"} Oct 02 20:12:55 crc kubenswrapper[4832]: I1002 20:12:55.238843 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vwpr" event={"ID":"aeab489e-1205-4334-a5cc-dd5d8a5cec51","Type":"ContainerStarted","Data":"221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c"} Oct 02 20:12:56 crc kubenswrapper[4832]: I1002 20:12:56.249546 4832 generic.go:334] "Generic (PLEG): container finished" podID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerID="221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c" exitCode=0 Oct 02 20:12:56 crc kubenswrapper[4832]: I1002 20:12:56.249596 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vwpr" event={"ID":"aeab489e-1205-4334-a5cc-dd5d8a5cec51","Type":"ContainerDied","Data":"221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c"} Oct 02 20:12:56 crc kubenswrapper[4832]: I1002 20:12:56.875300 4832 patch_prober.go:28] interesting pod/machine-config-daemon-hc6sg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 02 20:12:56 crc kubenswrapper[4832]: I1002 20:12:56.875638 4832 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 02 20:12:56 crc kubenswrapper[4832]: I1002 20:12:56.875678 4832 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" Oct 02 20:12:56 crc kubenswrapper[4832]: I1002 20:12:56.876377 4832 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"} pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 02 20:12:56 crc kubenswrapper[4832]: I1002 20:12:56.876437 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerName="machine-config-daemon" containerID="cri-o://811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c" gracePeriod=600 Oct 02 20:12:57 crc kubenswrapper[4832]: E1002 20:12:57.011618 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:12:57 crc kubenswrapper[4832]: I1002 20:12:57.264274 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vwpr" event={"ID":"aeab489e-1205-4334-a5cc-dd5d8a5cec51","Type":"ContainerStarted","Data":"c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f"} Oct 02 20:12:57 crc kubenswrapper[4832]: I1002 20:12:57.267041 4832 generic.go:334] "Generic (PLEG): container finished" podID="e93ac374-cf01-41ab-a628-5c2cb5de7437" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c" exitCode=0 Oct 02 20:12:57 crc kubenswrapper[4832]: I1002 20:12:57.267085 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" event={"ID":"e93ac374-cf01-41ab-a628-5c2cb5de7437","Type":"ContainerDied","Data":"811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"} Oct 02 20:12:57 crc kubenswrapper[4832]: I1002 20:12:57.267117 4832 scope.go:117] "RemoveContainer" containerID="7ef522b7b58d76f3efc9401215fda40f442def9ce3c07d5450ebdc9abcdb576e" Oct 02 20:12:57 crc kubenswrapper[4832]: I1002 20:12:57.267935 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c" Oct 02 20:12:57 crc kubenswrapper[4832]: E1002 20:12:57.268285 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:12:57 crc kubenswrapper[4832]: I1002 20:12:57.297823 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9vwpr" podStartSLOduration=3.817483556 podStartE2EDuration="7.29780316s" podCreationTimestamp="2025-10-02 20:12:50 +0000 UTC" firstStartedPulling="2025-10-02 20:12:53.215602131 +0000 UTC m=+6730.185045013" lastFinishedPulling="2025-10-02 20:12:56.695921735 +0000 UTC m=+6733.665364617" observedRunningTime="2025-10-02 20:12:57.290967697 +0000 UTC m=+6734.260410609" watchObservedRunningTime="2025-10-02 
20:12:57.29780316 +0000 UTC m=+6734.267246032" Oct 02 20:13:01 crc kubenswrapper[4832]: I1002 20:13:01.266560 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:13:01 crc kubenswrapper[4832]: I1002 20:13:01.266975 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:13:01 crc kubenswrapper[4832]: I1002 20:13:01.324722 4832 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:13:01 crc kubenswrapper[4832]: I1002 20:13:01.407530 4832 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:13:01 crc kubenswrapper[4832]: I1002 20:13:01.570229 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9vwpr"] Oct 02 20:13:03 crc kubenswrapper[4832]: I1002 20:13:03.333576 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9vwpr" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerName="registry-server" containerID="cri-o://c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f" gracePeriod=2 Oct 02 20:13:03 crc kubenswrapper[4832]: I1002 20:13:03.863824 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:13:03 crc kubenswrapper[4832]: I1002 20:13:03.953477 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq4vn\" (UniqueName: \"kubernetes.io/projected/aeab489e-1205-4334-a5cc-dd5d8a5cec51-kube-api-access-qq4vn\") pod \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " Oct 02 20:13:03 crc kubenswrapper[4832]: I1002 20:13:03.953576 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-catalog-content\") pod \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " Oct 02 20:13:03 crc kubenswrapper[4832]: I1002 20:13:03.953615 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-utilities\") pod \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\" (UID: \"aeab489e-1205-4334-a5cc-dd5d8a5cec51\") " Oct 02 20:13:03 crc kubenswrapper[4832]: I1002 20:13:03.954657 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-utilities" (OuterVolumeSpecName: "utilities") pod "aeab489e-1205-4334-a5cc-dd5d8a5cec51" (UID: "aeab489e-1205-4334-a5cc-dd5d8a5cec51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:13:03 crc kubenswrapper[4832]: I1002 20:13:03.976980 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeab489e-1205-4334-a5cc-dd5d8a5cec51-kube-api-access-qq4vn" (OuterVolumeSpecName: "kube-api-access-qq4vn") pod "aeab489e-1205-4334-a5cc-dd5d8a5cec51" (UID: "aeab489e-1205-4334-a5cc-dd5d8a5cec51"). InnerVolumeSpecName "kube-api-access-qq4vn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.021460 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aeab489e-1205-4334-a5cc-dd5d8a5cec51" (UID: "aeab489e-1205-4334-a5cc-dd5d8a5cec51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.056895 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq4vn\" (UniqueName: \"kubernetes.io/projected/aeab489e-1205-4334-a5cc-dd5d8a5cec51-kube-api-access-qq4vn\") on node \"crc\" DevicePath \"\"" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.056941 4832 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.056954 4832 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeab489e-1205-4334-a5cc-dd5d8a5cec51-utilities\") on node \"crc\" DevicePath \"\"" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.345100 4832 generic.go:334] "Generic (PLEG): container finished" podID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerID="c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f" exitCode=0 Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.345153 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9vwpr" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.345171 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vwpr" event={"ID":"aeab489e-1205-4334-a5cc-dd5d8a5cec51","Type":"ContainerDied","Data":"c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f"} Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.346105 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9vwpr" event={"ID":"aeab489e-1205-4334-a5cc-dd5d8a5cec51","Type":"ContainerDied","Data":"5e0fe3e0c05bb2f9f3fa7ab4e740b710fd8876f39419b5904d1a0d53944f0a4c"} Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.346127 4832 scope.go:117] "RemoveContainer" containerID="c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.382247 4832 scope.go:117] "RemoveContainer" containerID="221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.408775 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9vwpr"] Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.412421 4832 scope.go:117] "RemoveContainer" containerID="32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.420357 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9vwpr"] Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.471085 4832 scope.go:117] "RemoveContainer" containerID="c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f" Oct 02 20:13:04 crc kubenswrapper[4832]: E1002 20:13:04.471564 4832 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f\": container with ID starting with c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f not found: ID does not exist" containerID="c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.471609 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f"} err="failed to get container status \"c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f\": rpc error: code = NotFound desc = could not find container \"c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f\": container with ID starting with c40519c3a034e62ffe810f212c1912e55cf2defbdab0888e5b69d320014d6e8f not found: ID does not exist" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.471634 4832 scope.go:117] "RemoveContainer" containerID="221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c" Oct 02 20:13:04 crc kubenswrapper[4832]: E1002 20:13:04.472020 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c\": container with ID starting with 221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c not found: ID does not exist" containerID="221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.472061 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c"} err="failed to get container status \"221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c\": rpc error: code = NotFound desc = could not find container \"221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c\": container with ID starting with 221e4b4bcab9f925f1b1f90a273d956a5afd581adfa9a9b2b19bc2a13582220c not found: ID does not exist" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.472089 4832 scope.go:117] "RemoveContainer" containerID="32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8" Oct 02 20:13:04 crc kubenswrapper[4832]: E1002 20:13:04.472445 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8\": container with ID starting with 32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8 not found: ID does not exist" containerID="32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8" Oct 02 20:13:04 crc kubenswrapper[4832]: I1002 20:13:04.472465 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8"} err="failed to get container status \"32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8\": rpc error: code = NotFound desc = could not find container \"32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8\": container with ID starting with 32d93ee4a18ee70291bfbebfecf794fc563dfd2da13a855a5e66f184930894a8 not found: ID does not exist" Oct 02 20:13:05 crc kubenswrapper[4832]: I1002 20:13:05.247181 4832 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" path="/var/lib/kubelet/pods/aeab489e-1205-4334-a5cc-dd5d8a5cec51/volumes" Oct 02 20:13:10 crc kubenswrapper[4832]: I1002 20:13:10.223698 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c" Oct 02 20:13:10 crc kubenswrapper[4832]: E1002 20:13:10.224684 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:13:21 crc kubenswrapper[4832]: I1002 20:13:21.223420 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c" Oct 02 20:13:21 crc kubenswrapper[4832]: E1002 20:13:21.224214 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:13:36 crc kubenswrapper[4832]: I1002 20:13:36.222977 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c" Oct 02 20:13:36 crc kubenswrapper[4832]: E1002 20:13:36.223847 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:13:47 crc kubenswrapper[4832]: I1002 20:13:47.223053 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c" Oct 02 20:13:47 crc kubenswrapper[4832]: E1002 20:13:47.225868 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:13:57 crc kubenswrapper[4832]: I1002 20:13:57.003465 4832 generic.go:334] "Generic (PLEG): container finished" podID="35969de4-68f3-4956-bf07-eff642d64df3" containerID="ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9" exitCode=0 Oct 02 20:13:57 crc kubenswrapper[4832]: I1002 20:13:57.003563 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txx97/must-gather-9z9r4" event={"ID":"35969de4-68f3-4956-bf07-eff642d64df3","Type":"ContainerDied","Data":"ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9"} Oct 02 20:13:57 crc kubenswrapper[4832]: I1002 20:13:57.005409 4832 scope.go:117] "RemoveContainer" 
containerID="ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9" Oct 02 20:13:57 crc kubenswrapper[4832]: I1002 20:13:57.835868 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-txx97_must-gather-9z9r4_35969de4-68f3-4956-bf07-eff642d64df3/gather/0.log" Oct 02 20:13:59 crc kubenswrapper[4832]: I1002 20:13:59.223452 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c" Oct 02 20:13:59 crc kubenswrapper[4832]: E1002 20:13:59.223814 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437" Oct 02 20:14:11 crc kubenswrapper[4832]: I1002 20:14:11.686204 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-txx97/must-gather-9z9r4"] Oct 02 20:14:11 crc kubenswrapper[4832]: I1002 20:14:11.687117 4832 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-txx97/must-gather-9z9r4" podUID="35969de4-68f3-4956-bf07-eff642d64df3" containerName="copy" containerID="cri-o://ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9" gracePeriod=2 Oct 02 20:14:11 crc kubenswrapper[4832]: I1002 20:14:11.701625 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-txx97/must-gather-9z9r4"] Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.185465 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-txx97_must-gather-9z9r4_35969de4-68f3-4956-bf07-eff642d64df3/copy/0.log" Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.186138 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txx97/must-gather-9z9r4" Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.229029 4832 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-txx97_must-gather-9z9r4_35969de4-68f3-4956-bf07-eff642d64df3/copy/0.log" Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.229429 4832 generic.go:334] "Generic (PLEG): container finished" podID="35969de4-68f3-4956-bf07-eff642d64df3" containerID="ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9" exitCode=143 Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.229471 4832 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txx97/must-gather-9z9r4"
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.229479 4832 scope.go:117] "RemoveContainer" containerID="ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9"
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.254076 4832 scope.go:117] "RemoveContainer" containerID="ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9"
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.297749 4832 scope.go:117] "RemoveContainer" containerID="ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9"
Oct 02 20:14:12 crc kubenswrapper[4832]: E1002 20:14:12.298246 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9\": container with ID starting with ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9 not found: ID does not exist" containerID="ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9"
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.298313 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9"} err="failed to get container status \"ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9\": rpc error: code = NotFound desc = could not find container \"ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9\": container with ID starting with ed94031672f8d64c08b7ad13b02657792c1d9561a8c8e04bfa1608c72f747df9 not found: ID does not exist"
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.298345 4832 scope.go:117] "RemoveContainer" containerID="ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9"
Oct 02 20:14:12 crc kubenswrapper[4832]: E1002 20:14:12.298686 4832 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9\": container with ID starting with ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9 not found: ID does not exist" containerID="ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9"
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.298717 4832 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9"} err="failed to get container status \"ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9\": rpc error: code = NotFound desc = could not find container \"ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9\": container with ID starting with ee15f09d0ce1d5e586afcd1b732deb158cf248b15847484f9e8d332eefbf27e9 not found: ID does not exist"
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.313398 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vvgl\" (UniqueName: \"kubernetes.io/projected/35969de4-68f3-4956-bf07-eff642d64df3-kube-api-access-9vvgl\") pod \"35969de4-68f3-4956-bf07-eff642d64df3\" (UID: \"35969de4-68f3-4956-bf07-eff642d64df3\") "
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.313488 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/35969de4-68f3-4956-bf07-eff642d64df3-must-gather-output\") pod \"35969de4-68f3-4956-bf07-eff642d64df3\" (UID: \"35969de4-68f3-4956-bf07-eff642d64df3\") "
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.318561 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35969de4-68f3-4956-bf07-eff642d64df3-kube-api-access-9vvgl" (OuterVolumeSpecName: "kube-api-access-9vvgl") pod "35969de4-68f3-4956-bf07-eff642d64df3" (UID: "35969de4-68f3-4956-bf07-eff642d64df3"). InnerVolumeSpecName "kube-api-access-9vvgl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.416154 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vvgl\" (UniqueName: \"kubernetes.io/projected/35969de4-68f3-4956-bf07-eff642d64df3-kube-api-access-9vvgl\") on node \"crc\" DevicePath \"\""
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.496367 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35969de4-68f3-4956-bf07-eff642d64df3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "35969de4-68f3-4956-bf07-eff642d64df3" (UID: "35969de4-68f3-4956-bf07-eff642d64df3"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 02 20:14:12 crc kubenswrapper[4832]: I1002 20:14:12.518915 4832 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/35969de4-68f3-4956-bf07-eff642d64df3-must-gather-output\") on node \"crc\" DevicePath \"\""
Oct 02 20:14:13 crc kubenswrapper[4832]: I1002 20:14:13.243475 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35969de4-68f3-4956-bf07-eff642d64df3" path="/var/lib/kubelet/pods/35969de4-68f3-4956-bf07-eff642d64df3/volumes"
Oct 02 20:14:14 crc kubenswrapper[4832]: I1002 20:14:14.223797 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:14:14 crc kubenswrapper[4832]: E1002 20:14:14.224768 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:14:26 crc kubenswrapper[4832]: I1002 20:14:26.223523 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:14:26 crc kubenswrapper[4832]: E1002 20:14:26.224304 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:14:40 crc kubenswrapper[4832]: I1002 20:14:40.223859 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:14:40 crc kubenswrapper[4832]: E1002 20:14:40.224736 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:14:52 crc kubenswrapper[4832]: I1002 20:14:52.223718 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:14:52 crc kubenswrapper[4832]: E1002 20:14:52.225566 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.209900 4832 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"]
Oct 02 20:15:00 crc kubenswrapper[4832]: E1002 20:15:00.211196 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35969de4-68f3-4956-bf07-eff642d64df3" containerName="gather"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.211238 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="35969de4-68f3-4956-bf07-eff642d64df3" containerName="gather"
Oct 02 20:15:00 crc kubenswrapper[4832]: E1002 20:15:00.211291 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerName="registry-server"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.211298 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerName="registry-server"
Oct 02 20:15:00 crc kubenswrapper[4832]: E1002 20:15:00.211312 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerName="extract-content"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.211319 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerName="extract-content"
Oct 02 20:15:00 crc kubenswrapper[4832]: E1002 20:15:00.211332 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerName="extract-utilities"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.211338 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerName="extract-utilities"
Oct 02 20:15:00 crc kubenswrapper[4832]: E1002 20:15:00.211357 4832 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35969de4-68f3-4956-bf07-eff642d64df3" containerName="copy"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.211362 4832 state_mem.go:107] "Deleted CPUSet assignment" podUID="35969de4-68f3-4956-bf07-eff642d64df3" containerName="copy"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.211563 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="35969de4-68f3-4956-bf07-eff642d64df3" containerName="copy"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.211597 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeab489e-1205-4334-a5cc-dd5d8a5cec51" containerName="registry-server"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.211611 4832 memory_manager.go:354] "RemoveStaleState removing state" podUID="35969de4-68f3-4956-bf07-eff642d64df3" containerName="gather"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.212794 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.214640 4832 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.221650 4832 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.238998 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"]
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.286856 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-config-volume\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.287290 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-secret-volume\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.287780 4832 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnd7\" (UniqueName: \"kubernetes.io/projected/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-kube-api-access-lxnd7\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.390622 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxnd7\" (UniqueName: \"kubernetes.io/projected/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-kube-api-access-lxnd7\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.390747 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-config-volume\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.390819 4832 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-secret-volume\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.392439 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-config-volume\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.397774 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-secret-volume\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.407470 4832 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxnd7\" (UniqueName: \"kubernetes.io/projected/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-kube-api-access-lxnd7\") pod \"collect-profiles-29323935-9nhls\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:00 crc kubenswrapper[4832]: I1002 20:15:00.579964 4832 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:01 crc kubenswrapper[4832]: I1002 20:15:01.084937 4832 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"]
Oct 02 20:15:01 crc kubenswrapper[4832]: W1002 20:15:01.086392 4832 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34be03d2_b0d6_4cbf_8c1a_87eb43a9a0ea.slice/crio-7f3efbdb4dc31bf70797147ee93167de062fce3c201dd6bae4cd42627405e972 WatchSource:0}: Error finding container 7f3efbdb4dc31bf70797147ee93167de062fce3c201dd6bae4cd42627405e972: Status 404 returned error can't find the container with id 7f3efbdb4dc31bf70797147ee93167de062fce3c201dd6bae4cd42627405e972
Oct 02 20:15:01 crc kubenswrapper[4832]: I1002 20:15:01.835526 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls" event={"ID":"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea","Type":"ContainerStarted","Data":"3e3feb953bc9f92a730d108d4c0c4b30b67b0d6bcdd1c9a2362dfab0ff487dc8"}
Oct 02 20:15:01 crc kubenswrapper[4832]: I1002 20:15:01.835836 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls" event={"ID":"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea","Type":"ContainerStarted","Data":"7f3efbdb4dc31bf70797147ee93167de062fce3c201dd6bae4cd42627405e972"}
Oct 02 20:15:01 crc kubenswrapper[4832]: I1002 20:15:01.872295 4832 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls" podStartSLOduration=1.872252763 podStartE2EDuration="1.872252763s" podCreationTimestamp="2025-10-02 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 20:15:01.857030831 +0000 UTC m=+6858.826473733" watchObservedRunningTime="2025-10-02 20:15:01.872252763 +0000 UTC m=+6858.841695635"
Oct 02 20:15:02 crc kubenswrapper[4832]: I1002 20:15:02.863754 4832 generic.go:334] "Generic (PLEG): container finished" podID="34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea" containerID="3e3feb953bc9f92a730d108d4c0c4b30b67b0d6bcdd1c9a2362dfab0ff487dc8" exitCode=0
Oct 02 20:15:02 crc kubenswrapper[4832]: I1002 20:15:02.863998 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls" event={"ID":"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea","Type":"ContainerDied","Data":"3e3feb953bc9f92a730d108d4c0c4b30b67b0d6bcdd1c9a2362dfab0ff487dc8"}
Oct 02 20:15:03 crc kubenswrapper[4832]: I1002 20:15:03.222846 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:15:03 crc kubenswrapper[4832]: E1002 20:15:03.223176 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.290595 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.390755 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-config-volume\") pod \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") "
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.391078 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxnd7\" (UniqueName: \"kubernetes.io/projected/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-kube-api-access-lxnd7\") pod \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") "
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.391238 4832 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-secret-volume\") pod \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\" (UID: \"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea\") "
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.397219 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-config-volume" (OuterVolumeSpecName: "config-volume") pod "34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea" (UID: "34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.400039 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-kube-api-access-lxnd7" (OuterVolumeSpecName: "kube-api-access-lxnd7") pod "34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea" (UID: "34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea"). InnerVolumeSpecName "kube-api-access-lxnd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.400164 4832 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea" (UID: "34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.493774 4832 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-config-volume\") on node \"crc\" DevicePath \"\""
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.493824 4832 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxnd7\" (UniqueName: \"kubernetes.io/projected/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-kube-api-access-lxnd7\") on node \"crc\" DevicePath \"\""
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.493839 4832 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.885967 4832 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls" event={"ID":"34be03d2-b0d6-4cbf-8c1a-87eb43a9a0ea","Type":"ContainerDied","Data":"7f3efbdb4dc31bf70797147ee93167de062fce3c201dd6bae4cd42627405e972"}
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.886007 4832 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f3efbdb4dc31bf70797147ee93167de062fce3c201dd6bae4cd42627405e972"
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.886048 4832 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29323935-9nhls"
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.940727 4832 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt"]
Oct 02 20:15:04 crc kubenswrapper[4832]: I1002 20:15:04.949859 4832 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29323890-dvgnt"]
Oct 02 20:15:05 crc kubenswrapper[4832]: I1002 20:15:05.249019 4832 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ed559e0-37b1-4a74-9e5f-7fd37c0d5830" path="/var/lib/kubelet/pods/1ed559e0-37b1-4a74-9e5f-7fd37c0d5830/volumes"
Oct 02 20:15:16 crc kubenswrapper[4832]: I1002 20:15:16.223755 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:15:16 crc kubenswrapper[4832]: E1002 20:15:16.225776 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:15:25 crc kubenswrapper[4832]: I1002 20:15:25.309729 4832 scope.go:117] "RemoveContainer" containerID="c403e70c2b0edba9473f8540fd49becfe4fb7f998b2beedfbddc7f69a2ba77d5"
Oct 02 20:15:25 crc kubenswrapper[4832]: I1002 20:15:25.346814 4832 scope.go:117] "RemoveContainer" containerID="efc62151f3dd22a07cd27a500329b6c4771e0d1a4c03177ba81041451d107b8e"
Oct 02 20:15:27 crc kubenswrapper[4832]: I1002 20:15:27.223489 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:15:27 crc kubenswrapper[4832]: E1002 20:15:27.224405 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:15:40 crc kubenswrapper[4832]: I1002 20:15:40.223630 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:15:40 crc kubenswrapper[4832]: E1002 20:15:40.224528 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:15:54 crc kubenswrapper[4832]: I1002 20:15:54.222828 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:15:54 crc kubenswrapper[4832]: E1002 20:15:54.223717 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:16:05 crc kubenswrapper[4832]: I1002 20:16:05.231400 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:16:05 crc kubenswrapper[4832]: E1002 20:16:05.232192 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"
Oct 02 20:16:17 crc kubenswrapper[4832]: I1002 20:16:17.225331 4832 scope.go:117] "RemoveContainer" containerID="811cb0bfd5617afded3697ebe6739672e3889ff6deaa0ae23bec44b52c0d7b1c"
Oct 02 20:16:17 crc kubenswrapper[4832]: E1002 20:16:17.226230 4832 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hc6sg_openshift-machine-config-operator(e93ac374-cf01-41ab-a628-5c2cb5de7437)\"" pod="openshift-machine-config-operator/machine-config-daemon-hc6sg" podUID="e93ac374-cf01-41ab-a628-5c2cb5de7437"